Topics covered: Exceptions to Lewis structure rules; Ionic bonds
Instructor: Catherine Drennan, Elizabeth Vogel Taylor
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR: OK, let's get started here. Go ahead and take 10 more seconds on the clicker question, which probably looks all too familiar at this point, if you went to recitation yesterday. All right, and let's see how we do here.
OK. So, let's talk about this for one second. So what we're asking here, if we can settle down and listen up, is which equations can be used if we're talking about converting wavelength to energy for an electron. Remember, the key word here is electron. This might look familiar to the first part of problem one on the exam, and problem one on the exam is what tended to be the huge problem on the exam. I think over 2/3 of you decided on the exam to use this first equation, e equals h c over wavelength.
So I just want to reiterate one more time, why can we not use this equation if we're talking about an electron? C. OK, good, good, I'm hearing it. So the answer is c. What you need to do is you need to ask yourself if you're trying to convert from wavelength to energy for an electron, and you are tempted, because we are all tempted to use this equation, and if you were tempted, say, does an electron travel at the speed of light? And if the answer is no, an electron does not travel at the speed of light, light travels at the speed of light, then you want to stay away from using this equation. And I know how tempting it is to do that, but we have other equations we can use -- the de Broglie wavelength, and this is just a combination of energy equals 1/2 m v squared, and the definition of momentum, so we can combine those things to get it.
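To make the distinction concrete, here is a minimal sketch (not part of the lecture materials) that converts the same wavelength to energy both ways: E = hc/λ for a photon, and E = h²/(2mλ²) for an electron, which comes from combining the de Broglie relation λ = h/p with E = p²/2m. The 1-angstrom wavelength is an arbitrary choice for illustration.

```python
# Minimal sketch (not from the lecture): converting a wavelength to an energy
# for a photon vs. for an electron. E = h*c/lam applies only to light; for an
# electron, combine de Broglie (lam = h/p) with E = p**2/(2*m) to get
# E = h**2 / (2*m*lam**2).

h = 6.626e-34     # Planck's constant, J*s
c = 2.998e8       # speed of light, m/s
m_e = 9.109e-31   # electron mass, kg

lam = 1.0e-10     # 1 angstrom, an arbitrary wavelength chosen for illustration

E_photon = h * c / lam                    # correct for a photon only
E_electron = h**2 / (2 * m_e * lam**2)    # correct for a (nonrelativistic) electron

print(f"photon:   {E_photon:.2e} J")      # ~2.0e-15 J
print(f"electron: {E_electron:.2e} J")    # ~2.4e-17 J -- nearly two orders of magnitude smaller
```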
You might be wondering why I'm telling you this now, you've already -- if you've lost points on that, lost the points on it, and what I'm saying to you is if there are parts of exam 1 that you did not do well on, you will have a chance to show us again that you now understand that material on the final. One quarter of the final is going to be exam 1 material, and what that means is when we look at your grade at the end of the semester, and we take a look at what you got on exam 1, and you're right at that borderline, and we say well, what happened, did they understand more at the end of the semester, did the concepts kind of solidify over the semester? And if they did and if you showed us that they did, then you're going to get bumped up into that next grade category.
So keep that in mind as you're reviewing the exam, sometimes if things don't go as well as you want them to, the temptation is just to put that exam away forever and ever. But the reality is that new material builds on that material, and specifically exam 1, question 1 a, which deals with converting wavelength to energy for an electron. I really want you guys to know this and to understand it, so I can guarantee you that you will see this on the final. Specifically, question 1, part a. You will see something very, very similar to this on the final. If you are thinking about 1 thing to go back and study on exam 1, 1 a is a really good choice for that. This is important to me, so you're going to see it on the final.
So if you have friends that aren't here, you might want to mention it to them, or maybe not, maybe this is your reward for coming to class, which is fine with me as well.
All right. So I want to talk a little bit about exam 1. I know most of you picked up your exam in recitation. If you didn't, any extra exams can be picked up in the Chemistry Education office, that's room 2204.
So, the class average for the exam was a 68%, which is actually a strong, solid average for an exam 1 grade in the fall semester of 5.111. What we typically see is something right in this range, either ranging from the 50's for an exam 1 average to occasionally getting into the 70's, but most commonly what we've seen for exam 1 averages is 60, 61 -- those low 60's. So in many ways, seeing this 68 here, this is a great sign that we are off to a good start for this semester. And I do want to address, because I know for many of you, this is only your second exam at MIT, and perhaps you've never gotten an exam back that didn't start with a 90 or start with an 80 in terms of the grades. So one thing you need to keep in mind is don't just look at the number grade. The reason that we give you these letter grade categories is so that you can understand what it actually means, what your exam score actually says in terms of how we perceive you as understanding the material.
So, for example, and this is the same categories that were shared in recitation, so I apologize for repeating, but I know sometimes when you get an exam back, no more information comes into your head except obsessing over the exam, so I'm just going to say it one more time, and that is between 88 and 100, so that's 20% of you got an A. This is just absolutely fantastic, you really nailed this very hard material and these hard questions on the exam where you had to both use equations and solve problems, but also understand the concept in order to get yourself started on solving the problem.
The same with the B, the B range was between 69 and 87 -- anywhere in between those ranges, you've got a B, some sort of B on the exam. So again, if you're in the A or the B category here, this is really something to be proud of, you really earned these grades. You know these exams, our 5.111 exams, we're not giving you points here, there are no gimme, easy points, you earned every single one of these points. So, A and B here, these are refrigerator-worthy grades, hang those up in your dorm. This is something to feel good about.
All right. So, for those of you that got between a 51 and a 68, this is somewhere in the C range. For some people, they feel comfortable being in the C range, other people really do not like being in this range. We understand that, there is plenty of room up there with the A's and the B's. You are welcome to come up to these higher ranges starting with the next exam. And what I want to tell you if you are in the C range, and this is not a place that you want to be in, anyone that's got below the class average, so below a 68 -- or a 68 or below, is eligible for free tutoring, and I put the website on the front page of your notes. This means you get a one-on-one tutor paid for by the Chemistry Department to help you if it's concepts you're not quite up on, if it's exam strategy that you need to work on more. Whatever it is that you need to work on, we want to help you get there.
So, if you have a grade that you're not happy with, that you're feeling upset or discouraged about, please, I'm happy to talk to all of you about your grades individually. You can come talk to me, bring your exam, and we'll go over what the strategy should be in terms of you succeeding on the next exam. You can do the same thing with your TAs -- all of your TAs are more than happy to meet with each and every one of you. And then in addition to that, we can set you up with a tutor if you are in the C range or below, in terms of this first exam.
All right. So 44 to 50, this is going to be in the D range. And then anything below a 44 is going to be failing on this exam. And also keep in mind, for those of you that are freshmen, you need at least a C to pass the class. So, if you did get a D or an F on the first exam, you are going to need to really evaluate why that happened and make some changes, and we're absolutely here to help you do that. So the real key is identifying where the problem is -- is it with understanding the concepts, are you in a study group that's dragging you along but you're not understanding? Do you kind of panic when you get in the exam? There are all sorts of scenarios we can talk about and we want to talk about them with you.
Seriously, even if we had a huge range in this exam from 17 to 100, if you're sitting there and you're the 17, and actually there's more than 1 so don't feel alone, if you're a 17 or you're a 20, it's not time to give up, it's not time to drop the class and say I'm no good at chemistry, I can't do this. You still can, this is your first couple of exams, certainly your first in this class, potentially one of your first at MIT, so there's tons of room to improve from here on out. This is only 100 points out of 750. So, the same thing goes if you did really well, you still have 650 other points that you need to deal with. So, make sure you don't just rest on your high score from this first exam.
So, OK, so that's pretty much what I wanted to say about the exam, and there are tons of resources if things didn't work out quite as you wanted. If you feel upset in any way, please come and talk to me. We want you to love chemistry and feel good about your ability to do it. Nobody gets into MIT by mistake, so you all deserve to be sitting here, and you all can pass this class and do well in it, so we can help you get there no matter what. You all absolutely can do this.
And then one more time, to reiterate, in case anyone missed it, 1 a, make sure you understand that, I feel like that's important. And actually all of 1 -- I really feel like the photoelectric effect is important for understanding all of these energy concepts. So, as you go on in this class, make sure you don't go on before you go back and make sure you understand that problem.
All right, so let's move on to material for exam 2 now, and we're already three lectures into exam 2 material. And I do want to say that in terms of 5.111, what tends to happen is the exam scores go up and up and up as we go from exam 1, to exam 2, to exam 3. One of the reasons is we are building on material, the other reason is you'll be shocked at how much better you are at taking an exam just a few weeks from now. So exam 2 will start with the Lewis structures, so go back in your notes if this doesn't sound familiar -- if you spent too much time -- or not too much time, spent a lot of time studying exam 1 and didn't move on here.
Today we're going to talk about the breakdown of the octet rule. Cases where we don't have eight electrons around our Lewis structures, then we'll move on to talking about ionic bonds. We had already talked about covalent bonds, and then we talked about Lewis structures, which describe the electron configuration in covalent bonds. So now let's think about the other extreme of ionic bonds, and then we'll talk about polar covalent bonds to end, if we get there or will start with that in class on Monday.
Also, a public service announcement for all of you: the deadline for voter registration in Massachusetts, which is where we are, is on Monday, if you want to register to vote. There are some websites up there that can guide you through registering and also can guide you through getting an absentee ballot for your home state, if you need one. And there are some booths around MIT -- I actually saw a 5.111 student manning one -- that will register you or get you an absentee ballot. So, the deadline's coming soon, so patriotic duty, I need to remind you of that as your chemistry teacher -- chemistry issues are important in politics as well. So make sure you get registered to vote.
I just remembered one more announcement, too, that I did want to mention, some of you may have friends in 5.112 and have heard their class average for exam 1. And I want to tell you, this happens every year -- their average was 15 points higher than our average. Last year, their average was also 15 points higher than our average. This is for exam 1. This is what tends to happen to 5.112 grades as the exams go on. This is what happens to 5.111. You guys are in a good spot. Also, I want to point out that what's important is not just that number grade, but also the letter that goes with it.
So, for example, if you got a 69 in this class on this exam, that's a B minus. If you got a 69 on your exam in 5.112, that's a D, you didn't pass the exam. So keep that in mind when your friend might have gotten a higher number grade than you and you know you understand the similar material just as well. Similarly, an 80 in this class on the exam was a B plus, a very high B. An 80 in that class is going to be a C. So, just don't worry so much about exactly where that average lies, you really want to think about what the letter grade means. OK, I've said enough. I just -- I hate to see people discouraged, and I know that a few people have been feeling discouraged, so that's my long-winded explanation of exam 1 grades.
All right. So, let's move on with life though, so talking about the breakdown of the octet rule. The first example where we're going to see a breakdown is any time we have an odd number of valence electrons. This is probably the easiest to explain and to think about, because if we have an odd number that means that we can't have our octet rule, because our octet rule works by pairing electrons. And if we have an odd number, we automatically have an odd electron out.
So, if we look at an example, the methyl radical, we can first think about how we draw the Lewis structure -- we draw the skeletal structure here. And then what we're going to do is add up our valence electrons -- we have 3 times 1 for the hydrogen atoms, carbon has 4 valence electrons, so we have a total of 7. If we want to fill all of our valence shells in each of these atoms, we're going to need a total of 14 electrons. So, what we're left with is 7 bonding electrons. We can fill in 6 of those straightforwardly here, because we know that we need to make 3 different bonds. And now we're left over with 1 electron that can't make a bond.
So, carbon does not have an octet yet. We can't get it one, but we can do the best we can and help it out by adding that extra electron onto the carbon atom, so that at least we're getting as close as possible to filling our octets.
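The bookkeeping in that walkthrough generalizes directly. Here is a small sketch of my own (the helper name lewis_counts is not from the course materials) that reproduces the counts used in lecture for the methyl radical and for two molecules coming up later.

```python
# Sketch of the electron bookkeeping from lecture (my own helper, not course code).
# available = total valence electrons (plus any extra electrons from a negative charge);
# needed    = electrons required to give every atom a full shell (8, or 2 for hydrogen);
# bonding   = needed - available; lone-pair electrons = available - bonding.
# An odd 'available' count leaves one electron unpaired -- a radical.

def lewis_counts(atoms, extra_charge_electrons=0):
    """atoms: list of (valence electrons, target shell size) for each atom."""
    available = sum(v for v, _ in atoms) + extra_charge_electrons
    needed = sum(target for _, target in atoms)
    bonding = needed - available
    lone_pair = available - bonding
    return available, needed, bonding, lone_pair

# Methyl radical, CH3: carbon (4 e-, octet) plus three hydrogens (1 e-, duet each)
print(lewis_counts([(4, 8), (1, 2), (1, 2), (1, 2)]))   # (7, 14, 7, 0) -- one electron left unpaired
# Molecular oxygen, O2
print(lewis_counts([(6, 8), (6, 8)]))                   # (12, 16, 4, 8)
# Chromate, CrO4 2-: the 2- charge contributes two extra electrons
print(lewis_counts([(6, 8)] + [(6, 8)] * 4, 2))         # (32, 40, 8, 24)
```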
This is what we call a radical species or a free radical. Free radical or radical species is essentially any type of a molecule that has this unpaired electron on one of the atoms. This might look really strange, we're used to seeing octets. But you'll realize, if you calculate the formal charge on this molecule, that it's not the worst situation ever for carbon. At least its formal charge is zero, even if it doesn't have -- it would rather have an extra bond and have a full octet. But it's not the worst scenario that we can imagine. But still, radicals tend to be incredibly reactive because they do want to fill that octet.
So, what happens when you have a radical is it tends to react with the first thing that it runs into, especially highly reactive radicals that are not stabilized in some other way, which you'll tend to talk about in organic chemistry -- how you can stabilize radicals.
So the term free radical should sound familiar to you, whether you've heard it in chemistry before, or you haven't heard it in chemistry but maybe have heard it in, I don't know, commercials for facial products or other things. People like to talk about free radicals, and there's sort of a hero that gets rid of free radicals, which is antioxidants. So you hear in a lot of different creams or products or vitamins that they have antioxidants in them, which get rid of free radicals. The reason you would want to get rid of free radicals is that free radicals are incredibly reactive, so they can damage DNA. It makes sense that if they hit a strand of DNA, they're going to react with the DNA, you end up breaking the strands of DNA and causing DNA damage.
So, this is actually what happens in aging because we have a lot of free radicals in our body. We can introduce them artificially, for example, cigarette smoke has a lot of really dangerous free radicals that get into the cells in your lungs, which damage your lung DNA, which can cause lung cancer. But also, all of us are living and breathing, which means we're having metabolism go on in our body, which means that as we use oxygen and as we metabolize our food, we are actually producing free radicals as well. So it's kind of a paradox because we need them because they are a natural by-product of these important processes, but then they can go on and damage cells, which is what kind of is causing aging and can lead to cancer.
We have enzymes in our body that repair damage that is done by free radicals, that will put the strands of DNA back together. And we also have antioxidants in our body. So, you might know that, for example, very brightly colored fruit is full of antioxidants, they're full of chemicals that will neutralize free radicals. Lots of vitamins are also antioxidants, so we have vitamin A on the top there and vitamin E.
So, the most common thing we think of when we think of free radicals is very reactive, bad for your body, causes DNA damage. But the reality is that free radicals are also essential for life. So this is kind of interesting to think about. And, for example, certain enzymes or proteins actually use free radicals in order to carry out the reactions that they carry out in your body. So, for example, this is a picture or a snapshot of a protein, a crystal structure of what's called ribonucleotide reductase. It's an enzyme that catalyzes an essential step in both DNA synthesis and also DNA repair, and it requires having radicals within its active site in order to carry out the chemistry.
So, this is kind of a neat paradox, because radicals damage DNA, but in order to repair your DNA, you need certain enzymes, and those enzymes require different types of free radicals. So, free radicals are definitely very interesting, and once we get -- or hopefully you will get into organic chemistry at some point and get to really think about what they do in terms of a radical mechanism.
We can think about radicals that are also more stable, so let's do another example with the molecule nitric oxide. So we can, again, draw the skeleton here, and just by looking at it we might not know it's a radical, but as we start to count valence electrons, we should be able to figure it out very quickly, because what we have is 11 valence electrons. We need 16 electrons to have full octets. So, we're left with 5 bonding electrons. We put a double bond in between our nitrogen and our oxygen, so what we're left over with is this single bonding electron, and we'll put that on the nitrogen here. And I'll explain why we put it on the nitrogen and not the oxygen in just a minute.
But what we find is then once we fill in the rest of the valence electrons in terms of lone pairs, this is the structure that we get. And if you add up all of the formal charges on the nitrogen and on the oxygen, what you'll see is they're both 0. So if you happen to try drawing this structure and you put the lone pair on oxygen and then you figured out the formal charge and saw that you had a split charge, a plus 1 and a minus 1, the first thing you might want to try is putting it on the other atom, and once you did that you'd see that you had a better structure with no formal charge.
I have to mention what nitric oxide does, because it's a very interesting molecule. Don't get it confused with nitrous oxide, which is happy gas, that's n 2 o. This is nitric oxide, and it's actually much more interesting than nitrous oxide. It's a signaling molecule in your body, it's one of the very few signaling molecules that is a gas, and obviously, it's also a radical. What happens with n o is that it's produced in the endothelium of your blood vessels, so the inner lining of your blood vessels, and it signals for the smooth muscle that lines your blood vessels to relax, which causes vasodilation, and by vasodilation, I just mean a widening of the blood vessels. So, n o signals for your blood vessels to get wider and allow more blood to flow through. And if you think about what consequences this could have for people in places with high altitude, so with lower oxygen levels, do you think that they produce more or less n o in their body? More? Yeah, it turns out they do produce more. The reason they produce more is that they want to have more blood flowing through their veins so that they can get more oxygenated blood into different parts of their body.
N o is also a target in the pharmaceutical industry. A very famous example, which became famous I guess over 10 years ago now, is a drug that actually targets one of n o's receptors, and this drug has the net effect of vasodilation or widening of blood vessels in a certain area in the body. So this is viagra, some of you may be familiar, I think everyone's heard of viagra. Now you know how viagra works. Viagra breaks down -- or rather, it inhibits the breakdown of -- n o's binding partner in just certain areas, not everywhere in your body. So, in those areas, what happens is you get more n o signaling, you get more vasodilation, you get increased blood flow. So that's a little bit of pharmacology for you here today.
All right, so let's talk about one more example in terms of the breakdown of the octet rule with radicals. Let's think about molecular oxygen. So let's go ahead and quickly draw this Lewis structure. We have o 2. The second thing we need to do is figure out valence electrons. 6 plus 6, so we would expect to see 12. For a complete octet we would need 8 electrons each, so 16. So in terms of bonding electrons, what we have is 4 bonding electrons. So, we can go ahead and fill those in as a double bond between the two oxygens.
So, what we end up having left, and this would be step six then because five was just filling in that, is 12 minus 4, so we have 8 lone pair electrons left. So we can just fill it in to our oxygens like this.
All right, so using everything we've learned about Lewis structures, we here have the structure of molecular oxygen. And I just want to point out for anyone that gets confused, when we talk about oxygen as an atom, that's o, but molecular oxygen is actually o 2, the same for molecular hydrogen, for example.
All right, so let's look at what the actual Lewis structure is for molecular oxygen, and it turns out that actually we don't have a double bond, we have a single bond, and we have two radicals. And any time we have two radicals, we talk about what's called a biradical. And while this exception to the Lewis structure rules, the exception to the octet rule for odd numbers of valence electrons, can clue us in to the fact that we have a radical, there's really no way for us to use Lewis structures to predict when we have a biradical, right, because we would just predict that we would get this Lewis structure here.
So, when I first introduced Lewis structures, I said these are great, they're really easy to use and they work about 90% of the time. This falls into that 10% that Lewis structures don't work for us. It turns out, in order to understand that this is the electron configuration for o 2, we need to use something called molecular orbital theory, and just wait till next Wednesday and we will tell you what that is, and we will, in fact, use it for oxygen. But until that point, I'll just tell you that molecular orbital theory takes into account quantum mechanics, which Lewis theory does not. So that's why, in fact, there are those 10% of cases that Lewis structures don't work for.
All right, the second case of exceptions to the octet rule is when we have octet-deficient molecules. So basically, this means we're going to have a molecule that's stable, even though it doesn't have a complete octet. And these tend to happen in group 13 molecules, and actually happen almost exclusively in group 13 molecules, specifically with boron and aluminum. So, any time you see a Lewis structure with boron or aluminum, you want to just remember to look out for the fact that these might have an incomplete octet, so look out for that when you see those atoms.
So, let's look at b f 3 as our example here. And what we see for b f 3 is that the number of valence electrons that we have is 24, because the valence number of electrons for boron is 3, and then 3 times 7 for each fluorine. For total filled octets we need 32, so that means we need 8 bonding electrons. So, let's assign two to each bond here, and then we're going to have two extra bonding electrons, so let's just arbitrarily pick a fluorine to give a double bond to. And then we can fill in the lone pair electrons, we have 16 left over. So thinking about what the formal charge is, if we want to figure out the formal charge for the boron here, what we're talking about is the valence number for boron, which is 3, minus 0 because there are no lone pairs, minus 1/2 of 8 because there are eight shared electrons. We get a formal charge of minus 1.
What is our formal charge -- since we learned this on Monday -- for the double bonded fluorine in b f 3? So, look at your notes and look at the fluorine that has a double bond to the boron, and I want you to go ahead and tell me what that formal charge should be.
All right, let's take 10 more seconds on that. OK, so 49%. So, let's go look back at the notes, we'll talk about why about 50% of you are right, and 50% need to review, which I totally understand you haven't had time to do yet, your formal charge rules from Monday's class, there were other things going on. But let's talk about how we figure out formal charge. Formal charge is just the number of valence electrons you have. So fluorine has 7. You should be able to look at a periodic table and see that fluorine has seven. What we subtract from that is the number of lone pair electrons, and there are four lone pair electrons on this double bonded fluorine, so it's minus 4. Then we subtract 1/2 of the shared electrons. Well we have a double bond with boron here, so we have a total of 4 shared electrons. And when we do the subtraction here, what we end up with is a formal charge plus 1 on the double bonded fluorine.
Without even doing a calculation, what do you think that the formal charge should be on your single bonded fluorines? Good. OK, it should be and it is 0. The reason it's zero in terms of calculating it is 7 minus 6 lone pair electrons minus 1/2 of 2 shared electrons is 0. The reason that you all told me, I think, and I hope, is that you know that the formal charges on the individual atoms have to add up to the total charge on the molecule. So if we already have a minus 1 and a plus 1, and we know we have no charge in the molecule, and we only have one type of atom left to talk about, that formal charge had better be 0.
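The same arithmetic can be checked mechanically. This is a small sketch of my own (the helper name formal_charge is not from the course materials) that reproduces the three numbers just worked out for this drawing of b f 3.

```python
# Small sketch (my own, not from the course notes): formal charge =
# valence electrons - lone-pair electrons - (shared electrons) / 2.

def formal_charge(valence, lone_pair_electrons, shared_electrons):
    return valence - lone_pair_electrons - shared_electrons // 2

# b f 3 drawn with one double bond, as on the board:
print(formal_charge(3, 0, 8))   # boron: 3 - 0 - 4 = -1
print(formal_charge(7, 4, 4))   # the double-bonded fluorine: 7 - 4 - 2 = +1
print(formal_charge(7, 6, 2))   # each single-bonded fluorine: 7 - 6 - 1 = 0
# The individual charges, -1 + 1 + 0 + 0, add up to 0, the overall charge on b f 3.
```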
OK. So this looks pretty good in terms of a Lewis structure, we figured out our formal charges. These also look pretty good, too, we don't have too much charge separation. But what actually it turns out is that if you experimentally look at what type of bonds you have, it turns out that all three of the b f bonds are equal in length, and they all have a length that would correspond to a single bond. So, experimentally, we know we have to throw out this Lewis structure here, we have some more information, let's think about how this could happen.
So this could happen, for example, if we take the two electrons that are in the b f double bond and we put them right onto the fluorine here, so now we have all single bonds. And let's think about what the formal charge situation would be in this case here. What happens here is now we would have a formal charge of 0 on the boron, and we'd have a formal charge of 0 on all of the fluorine atoms as well. So, it turns out that actually looking at formal charge, even though the first case didn't look too bad, this case actually looks a lot better. We have absolutely no formal charge separation whatsoever. It turns out again, boron and aluminum, those are the two that you want to look out for. They can be perfectly happy without a full octet, they're perfectly happy with 6 instead of 8 in terms of electrons in their valence shell. So that is our exception number two.
We have one more exception and this is valence shell expansion, and this can be the hardest to look out for, students tend to forget to look for this one, but it's very important as well, because there are a lot of structures that are affected by this. And this is only applicable if we're talking about a central atom that has an n value or a principal quantum number that's equal to or greater than three. What happens when we have n that's equal to or greater than three is that now, in addition to s orbitals and p orbitals, what else do we have available to us? D orbitals, great. So what we see is we have some empty d orbitals, which means that we can have more than eight electrons that fit around that central atom.
If you're looking to see if this is going to happen, do you think this would happen with a large or small central atom? So think of it in terms of just fitting. We've got to fit more than 8 electrons around here. Yeah, so it's going to be, we need to have a large central atom in order for this to take place. Literally, we just need to fit everything around it -- that's probably the easiest way to think about it. And what happens is it also tends to have small atoms that it's bonded to. Again, just think of it in terms of all fitting in there.
So, let's take an example, p c l 5. This first example is the more straightforward one, because as we start to draw the Lewis structure, what we see is that phosphorus has five chlorines around it. So we already know if we want to form five bonds we've broken our octet rule. But let's go through and figure this out and see how that happens.
What we know is we have 40 valence electrons -- 5 from the phosphorus, and 7 from each of the chlorine atoms. If we were to fill out all of those octets, that would be 48 electrons. So what we end up with when we do our Lewis structure calculation is that we only have 8 bonding electrons available to us. So we can fill those in between the phosphorus and the chlorines, those 8 bonding electrons.
So, this is obviously a problem. To make 5 p c l bonds we need 10 shared electrons, and we know that that's the situation because it's called p c l 5 and not p c l 4, so we can go right ahead and add in that extra electron pair. So we've used up 10 for bonding, so that means what we have left is 30 lone pair electrons, and I would not recommend filling all of these in your notes right now, you can go back and do that, but just know the rest end up filling up the octets for all of the chlorines.
So, in this first case where you actually need to make more than four bonds, you will immediately know you need to use this exception to the Lewis structure octet rule, but sometimes it won't be as obvious. So, let's look at c r o 4, the 2 minus version here, so a chromate ion, and if we draw the skeletal structure, we have four things that the chromium needs to bond to.
So, let's do the Lewis structure again. When we figure out the valence electrons, we have total, we have 6 from the chromium, we have 6 from each of the different oxygens, and where did this 2 come from? Yup, the negative charge. So, remember, we have 2 extra electrons hanging out in our molecule, so we need to include those. We have a total of 32. 40 are needed to fill up octets. So again, we have 8 bonding electrons available, so we can go ahead and fill these in between each of the bonds. What happens is that we then have 24 lone pair electrons left, and we can fill those in like this. And the problem comes now when we figure out the formal charge.
So, when we do that what we find is that the chromium has a formal charge of plus 2, and that each of the oxygens has a formal charge of minus 1. So we actually have a bit of charge separation here. Without even doing a calculation, what is the total charge when these are all added up? OK, it's minus 2, that's right. We know that the sum of the formal charges has to add up to minus 2, because that's the charge on our molecule. We can also just calculate it -- the chromium gives us a plus 2, then we have 4 times minus 1 for each of the oxygens, so we have a minus 2.
So, we have some charge separation here, and in some cases, if we're not at n equals 3 or higher, there's really nothing we can do about it, this would be the best structure we can do. But since we have these d orbitals available, we can use them, and it turns out that experimentally this is what's found: the bond lengths and strengths are not those of single bonds, but actually something between a single bond and a double bond.
So how do we get a 1 and 1/2 bond, for example, what's the term that lets us do that? Resonance. That's right. So that's exactly what's happening here. So, if we went ahead and drew this structure here where we have now two double bonds and two single bonds, that would be in resonance with another structure where we have two double bonds instead to these two oxygens, and now, single bonds to these two oxygens. We can actually also have several other resonance structures as well. Remember, the definition of a resonance structure is where all the atoms stay the same, but what we can do is move around the electrons -- we're moving around those extra two electrons that can be in double bonds.
So, why don't you tell me how many other resonance structures you would expect to see for this chromate ion? All right, let's take 10 more seconds on this.
All right. This is good. I know this is a real split response, but the right answer is the one indicated in the graph here: it's four. This takes a little bit of time to get used to thinking about all the different Lewis structures you can have. So, if you can't see it immediately right now, you guys should all go back home and try drawing out those four other Lewis structures -- for chromate, there are four others. You'll probably get a chance to literally do this example in recitation where you draw out all four, but it's even better to make sure you understand it before you get to that point. So, we can go back to the class notes.
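For what it's worth, the count follows from simple combinatorics: each of these resonance structures is fixed by choosing which two of the four oxygens carry the double bonds, so there are six structures in all, and subtracting the two already drawn leaves four others. A one-line check (my own, not course material):

```python
# A resonance structure of chromate is fixed by choosing which 2 of the 4 oxygens
# carry double bonds.
from math import comb
print(comb(4, 2) - 2)   # 6 structures in all; 4 beyond the 2 already drawn
```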
So it turns out there's four other Lewis structures, so basically just think about all the other different combinations where you can have single and double bonds, and when you draw those out, you end up with four. So, for every single one of these Lewis structures, we could figure out what the formal charges are, and what we would find is that it's 0 on the chromium, it's 0 for the double bonded oxygens, and it's going to be negative 1 for the single bonded oxygens.
So, what you can see is that in this situation, we end up having less formal charge separation, and that's what we're looking for, that's the more stable structure. So any time you can have an expanded octet -- an expanded valence shell, where you have n is equal to or greater than 3, and by expanding and adding more electrons into that valence shell, you lower the charge separation, you want to do that.
I also want to point out, I basically said there's 6 different ways we can draw this in terms of drawing all the resonance structures. You might be wondering if you have to figure out the formal charge for each structure individually, and the answer is no, you can pick any single structure and the formal charges will work out the same. So, for example, if you pick this structure and your friend picks this structure, you'll both get the right answer that there's just the negative 1 on the oxygens and no other formal charges in the molecule.
All right. So those are the end of our exceptions to the octet rule for Lewis structures, that's everything we're going to say about Lewis structures. And remember, that when we talk about Lewis structures, what they tell us is the electron configuration in covalent bonds, so that valence shell electron configuration. So we talked a lot about covalent bonds before we got into Lewis structures, and then how to represent covalent bonds by Lewis structures.
So now I'll say a little bit about ionic bonds, which are the other extreme, and when you have an ionic bond, what you have now is a complete transfer of either one or many electrons between two atoms. So the key word for covalent bond was electron sharing, the key word for ionic bonds is electron transfer. And the bonding between the two atoms ends up resulting from an attraction that we're very familiar with, which is the Coulomb or the electrostatic attraction between the negatively charged and the positively charged ions.
So let's take an example. The easiest one to think about is where we have a negative 1 and a positive 1 ion. So this is salt, n a c l -- actually lots of things are called salts, but this is what we think of as table salt. So, let's think about what we have to do if we want to form sodium chloride from the neutral sodium and chlorine atoms. So, the first thing that we're going to need to do is we need to convert sodium into sodium plus.
What does this process look like to you? Is this one of those periodic trends, perhaps? Can anyone name what we're looking at here? Exactly, ionization energy. So, if we're going to talk about the energy difference here, what we're going to be talking about is the ionization energy, or the energy it takes to rip off an electron from sodium in order to form the sodium plus ion. So, we can just put right here, that's 494 kilojoules per mole.
The next thing that we want to look at is chlorine, so in terms of chlorine we need to go to chlorine minus, so we actually need to add an electron. This is actually the reverse of one of the periodic trends we talked about. Which trend is that this is the reverse of? Electron affinity, right. Because if we go backwards we're saying how badly does chlorine want to grab an electron? Chlorine wants to do this very badly, and it turns out the electron affinity for chlorine is huge, it's 349 kilojoules per mole, but remember, we're going in reverse, so we need to talk about it as negative 349 kilojoules per mole.
So if we talk about the sum of what's happening here, what we need to do is think about going from the neutrals to the ions, so we can just add those two energies together, and what we end up with is plus 145 kilojoules per mole, in order to go from neutral sodium and chlorine to the ions.
So, the problem here is that we have to actually put energy into our system, so this doesn't seem favorable, right. What's favorable is when we actually get energy out and our energy gets lower, but what we're saying here is that we actually need to put in energy. So another way to say this is this process actually requires energy. It does not emit energy, it does not give off excess energy, it requires energy.
So, we need to think about how we can solve this problem in terms of thinking about ionic bonds, and the answer is Coulomb attraction. So there's one more force that we need to talk about, and that is when we talk about the attraction between the negatively and the positively charged ions, such that we form sodium chloride. So this process here has a delta energy, a change in energy of negative 589 kilojoules per mole. So that's huge, we're giving off a lot of energy by this attraction. So if we add up the net energy for all of this process, all we need to do is add negative 589 to plus 145. So what we end up getting is the net energy change is going to be negative 444 kilojoules per mole, so you can see that, in fact, it is very favorable for neutral sodium and neutral chlorine to form sodium chloride in an ionic bond. And the net change, then, is a decrease in energy.
So, I just gave you the number in terms of what that Coulomb potential would be in attraction, but we can easily calculate it as well using this equation here, where the energy is equal to the charge on each of the ions, multiplied by the value of the charge on an electron, divided by 4 pi epsilon nought times r, where r is just the distance -- the bond length we're talking about.
So, let's calculate and make sure that I didn't tell you a false number here. Let's say we do the calculation with the bond length that we've looked up, which is 2.36 angstroms for the bond length between sodium and chloride. So we should be able to figure out the Coulombic attraction for this.
So, if we talk about the energy of attraction, we need to multiply plus 1, that's the charge on the sodium, times minus 1, the charge on the chlorine, times the charge on an electron, 1.602 times 10 to the negative 19 coulombs, and that's all divided by 4 pi, and then I've written out epsilon nought in your notes, so I won't write it on the board. And then r, so r is going to be 2.36 times -- what is an angstrom, everyone? Yup, 10 to the negative 10. So 10 to the negative 10 meters. So, if we do this calculation here, what we end up with is negative 9.774 times 10 to the negative 19 joules.
So that's what we have in terms of our energy. That does not look the same as what we saw -- yup, do you have a question?
PROFESSOR: OK. Luckily, although I did not write it in my own notes, I did it when I put it in my calculator, thank you. So you need to square this value here and then you should get this value right here, negative 9.77.
All right, so what we need to do though is convert from joules into kilojoules per mole, because that's what we were using. So if we multiply that number there by kilojoules per mole -- or excuse me, first kilojoules per joule, so we have 1,000 joules in every kilojoule. And then we multiply that by Avogadro's number, 6.022 times 10 to the 23 per mole. What we end up with is negative 589 kilojoules per mole. So this is that same Coulombic attraction that we saw in the first place.
So, notice that you will naturally get out a negative value here -- remember, negative means an attractive force in this case, because you have the plus 1 and the minus 1 in here. So we should be able to easily do that calculation, and what we end up getting matches up with what I just told you, luckily, and thank you for catching the square, that's an important part in getting the right answer. So, from our calculation then, what we find is that the change in energy for this reaction is negative 444 kilojoules per mole.
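To tie the numbers on the board together, here is a short sketch (my own, not course code) that redoes the Coulomb calculation and then combines it with the ionization energy and electron affinity from above. The physical constants are standard values; the 494 and 349 kilojoules per mole are the figures used in lecture.

```python
# Sketch (my own, not course code) reproducing the numbers from the board.
from math import pi

e = 1.602e-19      # elementary charge, C
eps0 = 8.854e-12   # vacuum permittivity, C^2 J^-1 m^-1
N_A = 6.022e23     # Avogadro's number, per mole
r = 2.36e-10       # Na-Cl distance, m (2.36 angstroms)

# Coulomb energy of one Na+ / Cl- pair: E = (z1 * z2 * e**2) / (4 * pi * eps0 * r)
E_pair = (+1) * (-1) * e**2 / (4 * pi * eps0 * r)   # about -9.77e-19 J
E_coulomb = E_pair * N_A / 1000                     # J per ion pair -> kJ/mol, about -589

# Net change going from the neutral atoms to the ion pair, using the lecture's values:
IE_Na = 494   # ionization energy of sodium, kJ/mol
EA_Cl = 349   # electron affinity of chlorine, kJ/mol
net = IE_Na - EA_Cl + E_coulomb                     # about -444 kJ/mol

print(f"{E_pair:.3e} J per pair, {E_coulomb:.0f} kJ/mol, net {net:.0f} kJ/mol")
```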
If we look experimentally, what we see is actually a little bit different: it's negative 411 kilojoules per mole. So, in terms of this class, this is the method that we're going to use, and we're going to say this gets us close enough such that we can make comparisons and have meaningful conversations about different types of ionic bonds and the attraction between them.
But let's think about where this discrepancy comes from, and before I do that I want to point out, one term we use a lot is the change in energy for a reaction where, for example, you break a bond. Remember that the negative of the change in energy is what's called delta e sub d. We first saw this when we first introduced the idea of covalent bonds. Do you remember what this term here means, delta e sub d? A little bit and some no's -- this was pre-exam, I understand, you still need to review those notes -- it's dissociation energy. So you get a negative change in energy out of forming the bond. The dissociation energy means how much energy that bond is worth in terms of strength, so it's the opposite of the energy you get out of breaking the bond -- or excuse me, the energy that you get out of forming the bond. The amount of energy you need to put in to break the bond is the dissociation energy. It takes this much energy to dissociate your bond.
All right. So, let's take a look here at our predictions, so I just put them both ways so we don't get confused. The dissociation energy is 444. The change in energy for forming the bond is negative 444. We made the following approximations, which explain why, in fact, we got a different experimental energy, if we look at that.
The first thing is that we ignored any repulsive interactions. If you think about salt, it's not just two single atoms that you are talking about. It's actually in a whole network or whole lattice of other ions, so you actually have some other chlorines around that are going to be having repulsive interactions with our chlorine that we're talking about. We're going to ignore those, make the approximation that those don't matter, at this point, in these calculations. And the result of that is that we end up with a larger dissociation energy than the experimental value. That's because the actual bond is going to be a little bit weaker than in our calculation, because we do have these repulsive interactions.
The other thing that we did is that we treated both sodium and the chlorine as point charges. And this is what actually allowed us to make this calculation and calculate the Coulomb potential so easily, we just treated them as if they're point charges. We're ignoring quantum mechanics in this -- this is sort of the class where we ignore quantum mechanics, we ignored it for Lewis structures, we're ignoring it here. We will be back to paying a lot of attention to quantum mechanics in lecture 14 when we talk about MO theory, but for now, these are approximations, these are models where we don't take it into consideration. And I think you'll agree that we come reasonably close such that we'll be able to make comparisons between different kinds of ionic bonds.
All right. So, the last thing I want to introduce today is talking about polar covalent bonds. We've now covered the two extremes. One extreme is complete total electron sharing -- if we have a perfectly covalent bond, we have perfect sharing. The other is electron transfer in terms of ionic bonds. So when we talk about a polar covalent bond, what we're now talking about is an unequal sharing of electrons between two atoms.
So, this is essentially something we've seen before, we just never formally talked about what we would call it. This is any time you have a bond forming between two non-metals that have different electronegativities, so, for example, hydrogen chloride, h c l. The electronegativity for hydrogen is 2.2, for chlorine it's 3.2. And in general, as a first approximation, what we say is we consider the bond polar if the difference in electronegativity is more than 0.5, so this is on the Pauling electronegativity scale. So what we end up having is what we call a partial negative charge on the chlorine, and a partial positive charge on the hydrogen. The reason we have that is because the chlorine's more electronegative, it wants to pull more of that shared electron density to itself. If it has more electron density, it's going to have a little bit of a negative charge and the hydrogen's going to be left with a little bit of a positive charge.
So, we can compare this, for example, to molecular hydrogen, where they're going to have that complete sharing, so there's not going to be a delta plus or a delta minus, delta is going to be equal to zero on each of the atoms. They are completely sharing their electrons.
And we can also explain this in another way by talking about a dipole moment, where we have a charge distribution that results in this dipole, this electric dipole. And we talk about this using the term mu, which is a measurement of what the dipole is. A dipole is always drawn as an arrow from the positive charge to the negative charge. In chemistry, we are always incredibly interested in what the electrons are doing, so we tend to pay attention to them in terms of arrows. Oh, the electrons are going over to the chlorine, so we're going to draw our arrow toward the chlorine atom.
So, we measure this here, so mu is equal to q times r, the distance between the two. And q, that charge, is just equal to the partial negative or the partial positive times the charge on the electron. So this is measured in Coulomb meters, but you won't ever see a dipole moment reported in Coulomb meters -- we tend to talk about it in terms of debye, or 1 d, or sometimes there are no units at all, so the d is just assumed, and it's because 1 debye is just equal to a very tiny number of Coulomb meters and it's a lot easier to work with debyes here.
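As a concrete illustration (the numbers here are commonly cited approximate values that I am supplying, not figures from the lecture): given a measured dipole moment of roughly 1.08 debye for h c l and a bond length of roughly 1.27 angstroms, you can turn mu = q times r around and solve for the partial charge delta.

```python
# Sketch with commonly cited approximate values (my numbers, not the lecture's):
# mu = q * r with q = delta * e; given a measured dipole moment, solve for delta.

e = 1.602e-19        # elementary charge, C
debye = 3.336e-30    # 1 debye in coulomb-meters

mu_HCl = 1.08 * debye   # measured dipole moment of HCl, roughly 1.08 D
r_HCl = 1.27e-10        # H-Cl bond length, roughly 1.27 angstroms

delta = mu_HCl / (e * r_HCl)
print(f"partial charge delta ~ {delta:.2f}")   # about 0.18 of a full electron charge
```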
So, when we talk about polar molecules, we can actually extend our idea of talking about polar bonds to talking about polar molecules. So, actually let's start with that on Monday. So everyone have a great weekend.

http://ocw.mit.edu/courses/chemistry/5-111-principles-of-chemical-science-fall-2008/video-lectures/lecture-12/
CHALCOLITHIC ERA in Persia. Chalcolithic (< Gk. khalkos “copper” + lithos “stone”) is a term adopted for the Near East early in this century as part of an attempt to refine the framework of cultural developmental “stages” (Paleolithic, Mesolithic, Neolithic, Bronze, and Iron Ages) and used by students of western European prehistory (E. F. Henrickson, 1983, pp. 68-79). In Near Eastern archeology it now generally refers to the “evolutionary” interval between two “revolutionary” eras of cultural development: the Neolithic (ca. 10,000-5500 b.c.e., but varying from area to area), during which techniques of food production and permanent village settlement were established in the highlands and adjacent regions, and the Bronze Age (ca. 3500-1500 b.c.e., also varying with the area), during which the first cities and state organizations arose.
Although archeologists have devoted less attention to the Chalcolithic, it was an era of fundamental economic, social, political, and cultural development, made possible by the economic advances of the Neolithic and providing in turn the essential basis for the innovations of the Bronze Age. The era can be divided into three general phases, Early, Middle, and Late Chalcolithic, approximately equivalent respectively to the Early, Middle, and Late Village periods identified by Frank Hole (1987a; 1987b; for more detailed discussion of the internal chronology of the Persian Chalcolithic, see Voigt; idem and Dyson). Those aspects most directly attested by archeological evidence (primarily demographic and economic) will be emphasized here, with some attention to less clearly identifiable social, political, and ideological trends. Persia is essentially a vast desert plateau surrounded by discontinuous habitable areas, limited in size and ecologically and geographically diverse, few of them archeologically well known, especially in the eastern half of the country. The evidence is highly uneven and drawn primarily from surveys and excavations in western and southwestern Persia.
Settlement patterns. It is remarkable that in so geographically diverse and discontinuous a country a single distinctive pattern of settlement development characterized the Chalcolithic era in most of the agriculturally exploitable highland valleys and lowland plains that have been surveyed. During the early phase most habitable areas were sparsely settled; small, undifferentiated village sites were located near streams or springs. This pattern was essentially an extension of the prevailing Neolithic settlement pattern and in a few areas (e.g., northwestern Iran; Swiny) appears to have continued throughout the Chalcolithic. In the great majority of the arable mountain valleys and lowland plains, however, it developed in several significant ways through the Middle and Late Chalcolithic. The number of villages increased substantially (in many areas strikingly so) at the end of the Early and especially in the Middle Chalcolithic; then, in the Late Chalcolithic the trend was abruptly reversed, and the number of permanent settlements had dropped precipitously by the end of the era. On the Susiana plain, an eastern extension of the Mesopotamian lowlands in southwestern Persia, Hole (1987a, p. 42) recorded sixteen sites of the Early (= Susiana a) and eighty-six of the Middle Chalcolithic (= Susiana d). In the Late Chalcolithic the number declined to fifty-eight (= early Susa A), then thirty-one (= later Susa A), and finally eighteen (= terminal Susa A). In the much smaller and slightly higher adjacent Deh Luran (Dehlorān) plain the pattern was similar but developed somewhat earlier. Fewer than ten settlement sites were recorded from the early phase of Early Chalcolithic (Chogha Mami Transitional phase 5, Sabz phase 8), approximately twenty from the later Early and early Middle Chalcolithic (Khazineh [Ḵazīna] phase 20, Mehmeh 18), and a steady decline through later Middle and Late Chalcolithic, with only a few permanent settlements by the end of the era (Bayat 14, Farukh [Farroḵ] 12, Susa A 5, Sargarab [Sargarāb]/Terminal Susa A 2; Hole, 1987a; idem, 1987b, p. 100). The best survey data available from southern Persia come from the Marvdašt plain in the broad Kor river drainage basin (Sumner, 1972; idem, 1977) and the smaller Fasā and Dārāb plains (Hole, 1987a, pp. 52-55; idem, 1987b, p. 101). In all three areas the overall settlement pattern was the same: The number of villages increased gradually through the Neolithic and the Early Chalcolithic to an impressive peak in the Middle Chalcolithic Bakun (Bakūn) period (e.g., 146 sites in the Kor river basin), only to drop off dramatically during the Late Chalcolithic and Bronze Age levels. In a survey of the Rūd-e Gošk (Kūšk) near Tepe Yahya (Yaḥyā) Martha Prickett (1976; 1986) found a similar pattern, with the peak in the Yahya VA phase and the sharp drop immediately afterward in the Aliabad (ʿAlīābād) phase (both Late Chalcolithic). In the central Zagros highlands of western Persia the three most comprehensively surveyed valleys revealed a generally similar settlement pattern, though the timing of the peak differed somewhat. In the Māhīdašt, one of the broadest and richest stretches of arable level land in the Zagros, alluviation has added as much as 10m to the late prehistoric land surface, and many Chalcolithic sites are undoubtedly still buried (Brookes et al.). 
Nevertheless, the number of known villages shows a marked increase from the Neolithic (ten in Sarāb) to the Early Chalcolithic; an abrupt and complete change in the ceramic assemblage, with the appearance at seventy sites of J ware, showing definite generic influence of Halaf (Ḥalaf) pottery in neighboring Mesopotamia (See ceramics iv. the chalcolithic period in the zagros), suggests that the increase may have been caused by an influx of people from the north and west. In the Middle Chalcolithic the number of sites at which black-on-buff and related monochrome-painted wares were found rose sharply to a prehistoric peak of 134. A small number of sites yielded pottery from the purely highland Dalma (Dalmā) tradition, indicating another source of external cultural influence (E. F. Henrickson, 1986; idem, 1990; idem and Vitali). Some degree of indirect outside influence from the Ubaid (ʿObayd) culture of lowland Mesopotamia is also apparent in several of the locally made monochrome-painted wares (E. F. Henrickson, 1986; idem, 1990). In the Late Chalcolithic the flourishing village life in the Māhīdašt seems to have declined; only a handful of sites have yielded pottery characteristic of this period (E. F. Henrickson, 1983, chap. 6; idem, 1985b). Either the settled population dropped considerably at this time, owing to emigration, increased mortality, or adoption of a more mobile and less archeologically visible life style like pastoralism, or the monochrome-painted buff-ceramic tradition persisted until the end of the Chalcolithic. Definitive answers await further investigations in the field. In the Kangāvar valley, 100 km east of the Māhīdašt on the great road to Khorasan, the pattern was noticeably different from that in the western and southern Zagros. The number of villages rose from a single Neolithic example, Shahnabad (Šahnābād) on mound C at Seh Gabi (Se Gābī; McDonald) to twenty in the early Middle Chalcolithic (Dalma phase), located almost exclusively near the streams crossing the central valley floor. All these villages were small, typically covering about 0.5 ha. In the Middle and early Late Chalcolithic the number and location of sites remained relatively stable (seventeen in the Seh Gabi phase, twenty-three contemporary with Godin [Gowdīn] VII), even though the ceramics and other aspects of material culture changed abruptly between these two phases. This stability probably reflects a similar stability in subsistence strategy, as well as greater isolation from external cultural influences. Only toward the end of the Late Chalcolithic was there a notable increase in the number of villages (thirty-nine sites contemporary with Godin VI). The delayed and less marked population increase in Kangāvar, anomalous compared to most well-surveyed areas of western Persia, may have resulted from the cooler, drier climate, established from both ancient and modern ecological data and from the marked clustering of sites on the valley floor near sources of irrigation water (E. F. Henrickson, 1983, pp. 9-36, 466-68). Sociopolitical developments and external connections with the lowlands may also have accounted for a local increase or influx of population during the Godin VI period (E. F. Henrickson, forthcoming; Weiss and Young). The smaller and more marginal Holaylān valley south of the Māhīdašt has been more intensively surveyed. 
Permanent settlement peaked there in the Middle Chalcolithic; subsistence strategies appear to have become more diversified in the Late Chalcolithic, followed by a marked decline in preserved sites of all types. Peder Mortensen (1974; 1976) found three cave sites, one open-air site, and five village settlements dating to the Neolithic, reflecting a diverse and not completely sedentary system in which both the valley floor and the surrounding hills were exploited economically. Neither J nor Dalma wares were found that far south, and the developments in the Early and early Middle Chalcolithic are thus unclear. Eleven sites with Middle Chalcolithic black-on-buff pottery resembling Seh Gabi painted and Māhīdašt black-on-buff wares were recorded, all on the valley floor (Mortensen, 1976, fig. 11). By the early Late Chalcolithic settlement had again been diversified to include two open-air and two village sites in the hills, as well as seven villages on the valley floor, all yielding ceramics related to generic Susa A wares, including black-on-red; the number of sites remained quite stable (Mortensen, 1976, fig. 13, legend erroneously exchanged with that of fig. 12). The sharp decline in settlement occurred later; only two villages on the valley floor, two cave sites, and two open-air camps, all yielding ceramics related to those of Sargarab and Godin VI, are known (Mortensen, 1976, fig. 12), suggesting a destabilization of village life and a concomitant increase in pastoralism in this area, as in others where the same general pattern has been observed (E. F. Henrickson, 1985a).
Modest settlement hierarchies seem to have developed in some highland valleys during the Chalcolithic, though such geological processes as alluviation and water and wind erosion have undoubtedly obscured the evidence in some areas. Normally a few larger villages seem to have grown up among a preponderance of small villages. In the Māhīdašt the average size of sites without heavy overburden was 1.6 ha in the Early and just over 1 ha in the Middle Chalcolithic, but several sites covering more than 3 ha existed in both phases (E. F. Henrickson, 1983, pp. 458-60). Nothing more is known about these sites, as none have been excavated. Tepe Giyan (Gīān) in the Nehāvand valley was a relatively large highland site (in the 3-ha range) from Early Chalcolithic times; seals and copper objects were found there (Contenau and Ghirshman; Hole, 1987a, pp. 87-89). At Godin Tepe, a small town in the Bronze Age (R. Henrickson, 1984), the Chalcolithic is buried under deep Bronze and Iron Age overburden, and it is not known how large or important it was in relation to the rest of Kangāvar during most of that era (Young, 1969; idem and Levine). During the Late Chalcolithic, however, an oval enclosure (Godin V) was located there, the seat of an enclave of people from the lowlands apparently involved in long-distance commodity exchange, contemporary with the latter part of the prosperous period VI occupation at Godin and in Kangāvar generally (Weiss and Young; Levine and Young). Elsewhere in the central Zagros, especially in northeastern Luristan, several large and strategically located Late Chalcolithic sites developed just at the time when the number of smaller settlements was abruptly declining (Goff, 1966; idem, 1971). In the southwestern lowlands of Ḵūzestān the evolution of a settlement hierarchy progressed farther than anywhere else in Chalcolithic Persia. In Dehlorān two settlement centers grew up. In the Farukh phase of the Middle Chalcolithic Farukhabad (Farroḵābād), estimated to have originally covered approximately 2 ha, contained at least one thick-walled, elaborately bonded brick building, constructed on a low platform (Wright, 1981, pp. 19-21), and in the Susa A period of the Late Chalcolithic the large site of Mussian (Mūsīān; Gautier and Lampre) dominated Dehlorān. Farther south, on the Susiana plain, two “primate” settlement centers developed during the Chalcolithic. Chogha Mish (Čoḡā Mīš, q.v.) in the east flourished in the Middle Chalcolithic, when the number of sites on the plain reached its peak; it covered an area of 11 ha and included domestic architecture and at least one large, thick-walled monumental public building with buttresses, containing many small rooms, including a pottery storeroom and a possible flint-working room (Delougaz; Delougaz and Kantor, 1972; idem, 1975; Kantor, 1976a; idem, 1976b). The contemporaneous settlement at Jaffarabad (Jaʿfarābād) was a specialized pottery-manufacturing site with many kilns (Dollfus, 1975). After the demise of Chogha Mish the settlement on the acropolis at Susa in western Susiana gained prominence, developing into the most impressive Chalcolithic center yet known in Persia, with an area of approximately 20 ha. The high platform was about 70 m square and stood more than 10 m high. Its brick facing was adorned with rows of inset ceramic “nails,” cylinders with flaring heads (Canal, 1978a; idem, 1978b).
Fragmentary architectural remains atop the platform suggest storage rooms and a larger structure that may have been a temple (Steve and Gasche), but the evidence for its function is inconclusive (Pollock). Beside one corner of the terrace was a mortuary structure analogous to a mass mausoleum (de Morgan; de Mecquenem; Canal, 1978a), containing an unknown number of burials, recently estimated at 1,000-2,000 (Hole, 1987a, pp. 41-42; idem, 1990). This burial facility was apparently not intended only for the elite: Only some of the burials were in brick-lined tombs, and a wide range of grave goods was included with individual bodies, from ordinary cooking pots to luxury objects, particularly eggshell-thin Susa A fine painted-ware goblets and copper axes (Canal, 1978a; Hole, 1983). The acropolis at Susa was thus a unique multipurpose Chalcolithic settlement and ceremonial center, a focal point for the region. It may not have had a large resident population, but it nevertheless served a series of complex centralizing sociopolitical functions, presumably both religious and secular. Centers like Chogha Mish and Susa, like the late Ubaid center at Eridu, presaged the rise of the first true cities in the Mesopotamian lowlands in the subsequent Uruk period.
Strategies for subsistence. Irrigation appears to have been utilized throughout the arable highland valleys and lowland plains of Persia for the first time during the Middle Chalcolithic. The best-documented area is Dehlorān, where careful collection and interpretation of botanical, settlement, and geomorphological data by several different expeditions have resulted in an unusually clear picture both of flourishing irrigation agriculture and of the subsequent abuse of the land and decline of permanent agricultural settlement in the Late Chalcolithic (Hole, Flannery, and Neely; Hole, 1977; Wright, 1975). Direct botanical evidence of Chalcolithic irrigation is not as rich for other sites in Persia, but in surveys of the Māhīdašt (Levine, 1974; idem, 1976; idem and McDonald), Kangāvar (Young, 1974), Susiana (Hole, 1987a; idem, 1987b), Kāna-Mīrzā (Zagarell), the Kor river basin (Sumner, 1983), and elsewhere linear alignment of contemporaneous sites along ancient watercourses provides strong indirect evidence. In the Rūd-e Gošk survey Prickett (1976) also noted a strong association between many Middle Chalcolithic (Yahya VB and VA) sites, on the one hand, and alluvial fans and ancient terraces used for flood irrigation, on the other. Of course, not all Middle Chalcolithic villages required irrigation; many were located in areas with sufficient rainfall for dry farming.
In the western highlands there is strong evidence of specialized mobile pastoralism, apparently distinct from settled village farming, during the Middle and especially the Late Chalcolithic (E. F. Henrickson, 1985a). It includes the isolated Paṛčīna and Hakalān cemeteries in the Pošt-e Kūh, located far from any ancient village site (Vanden Berghe, 1973; idem, 1974; idem, 1975a; idem, 1975b; idem, forthcoming); an increased number of open-air and cave sites located near sometimes seasonal sources of fresh water, in Holaylān, Ḵorramābād (Wright et al.), the Pošt-e Kūh (Kalleh Nissar [Kalla-Nesār]; Vanden Berghe, 1973), the hinterlands south and east of Susiana, including Īza and Qaḷʿa-ye Tal (Wright, 1987), and the Baḵtīārī region (Zagarell); and the appearance of at least one distinctive pottery type, black-on-red ware, which was widely but sparsely distributed in Luristan, Ḵūzestān, and adjacent areas, probably carried by mobile pastoralists (E. F. Henrickson, 1985a). The pervasive Late Chalcolithic decline in the number of villages provides indirect support for the hypothesis of increased diversification and mobility in subsistence strategies. In areas like the Kor river basin, where this decline appears to have been more gradual, many of the remaining sites are adjacent to natural grazing land, suggesting increased reliance on herding even among villagers (Hole, 1987a, pp. 54-55). Some degree of ecological or climatic deterioration may have contributed to this shift in certain areas, and political and economic pressures from the adjacent lowlands may also have increased (Lees and Bates; Bates and Lees; Adams; E. F. Henrickson, 1985a).
Crafts and “trade.” The Chalcolithic era was distinguished from other eras of prehistory by the variety of painted pottery that was produced, most of it utilitarian and probably made in village homes or by part-time potters who did not earn their livelihoods entirely from their craft. With a few notable exceptions, each highland valley system and lowland plain produced a distinctive ceramic assemblage over time; although there was some resemblance to pottery from nearby areas, typically each assemblage was recognizable as the work of a separate community, with different approaches and expectations. Technical and aesthetic quality, though variable, tended to improve over time, culminating in the Bakun painted ware of the Middle Chalcolithic and the Susa A fine ware of the Late Chalcolithic. Both were produced in prosperous and heavily populated areas during phases in which village settlement had reached or just passed its prehistoric zenith and pronounced settlement hierarchies had developed; their demise was associated with the subsequent rapid decline in permanent village settlement. Both were of extremely fine buff fabric without inclusions, skillfully decorated with a variety of standardized geometric patterns in dark paint; each, however, was characterized by a unique “grammar,” “syntax,” and symbolic “semantics” of design (Hole, 1984). It is not yet clear, however, that either or both of these wares resulted from occupational specialization. Archeological evidence for specialized ceramic production in the Persian Chalcolithic is extremely rare. At Tal-e Bakun, the type site for Bakun painted ware, one Middle Chalcolithic residential area of twelve buildings was excavated (Langsdorff and McCown). Several appear to have been potters’ workshops, in which work tables with nearby clay supplies and storage boxes for ash temper were found. In addition, three large kilns were associated with this group of houses (Langsdorff and McCown, pp. 8-15, figs. 2, 4). Hole (1987b, p. 86) has pointed out that the published plans imply that only one of the kilns was in use at any one time, which suggests specialized production, most likely of Bakun painted ware, perhaps partially for export: The ware was quite widespread in the Kor river basin and adjacent areas of southern Persia. The technical prowess and artistic sophistication involved are arguments for specialized production, possibly involving full-time artisans. From Susa itself there is no direct evidence of specialized ceramic production in the Susa A period, but many of the sites surveyed in Susiana have yielded remains of kilns and many wasters, evidence of widespread localized pottery production in Middle and Late Chalcolithic times. Although some excavated sites have also revealed houses with kilns (e.g., Tepe Bendebal [Band-e Bāll]; Dollfus, 1983), only one is known to have been devoted exclusively to ceramic production: Middle Chalcolithic (Chogha Mish phase) Jaffarabad (Dollfus, 1975). As with Bakun painted ware, however, the exceptionally high technical and aesthetic quality of Susa A fine ware strongly suggests production by full-time specialists at Susa itself and perhaps at other sites as well.
Wide geographic distribution of a distinctive ware or pottery style does not automatically indicate a centralized network of commodity distribution. The absence of efficient transportation in the Chalcolithic, especially in the highlands, must have precluded systematic, high-volume ceramic exchange, even between the few relatively highly organized centers. For example, in the early Middle Chalcolithic the full Dalma ceramic assemblage, characterized by painted and impressed wares, was remarkably widespread, dominating the Soldūz-Ošnū area of Azerbaijan and the Kangāvar and Nehāvand valleys of northeastern Luristan. The latter ware also occurred in conjunction with Dalma plain red-slipped ware in the Māhīdašt. This distribution pattern was almost certainly not the result of organized long-distance trade in Dalma pottery, which was not a “luxury” ware and was far too heavy and bulky to have been transported economically through the Zagros mountains, especially in the absence of wheeled vehicles and beasts of burden. Furthermore, Dalma settlement data reveal a strictly village economy with no sociopolitical or economic settlement hierarchy. The wide distribution of the pottery must therefore be explained sociologically, rather than economically, as reflecting the distribution of a people, probably a kin-based ethnic group that may have shared a common dialect or religion and produced a distinctive utilitarian pottery, as well as other visible but perishable items of material culture; these items would have served as group markers, analogous to the distinctive dress and rug patterns of today’s Zagros Kurds (E. F. Henrickson and Vitali). Similar situations in the Early Chalcolithic include the spread of Chogha Mami (Čoḡā Māmī) transitional pottery from eastern Mesopotamia into Dehlorān (Hole, 1977) and probably the appearance of J ware in the Māhīdašt (Levine and McDonald). Any pottery “exchange” over a considerable distance was probably a coincidental result of contact for other reasons; late Middle Chalcolithic-Late Chalcolithic black-on-red ware is a good example (E. F. Henrickson, 1985a). In other instances “related” pottery assemblages from adjacent areas are not identical, which implies that, instead of actual movement of vessels, indirect “exchange” took place involving assimilation of selected elements from an external ceramic style into local tradition. One example is the diluted and locally “edited” influence of Ubaid ceramics on otherwise diverse highland Māhīdašt pottery (E. F. Henrickson, 1983; idem, 1986; idem, 1990) in the Middle and Late Chalcolithic. In the eastern central Zagros and adjacent plateau area a different ceramic tradition, labeled Godin VI in the mountains and Sialk (Sīalk) III/6-7 (Ghirshman, 1938) and Ghabristan (Qabrestān) IV (Majidzadeh, 1976; idem, 1977; idem, 1978; idem, 1981) farther east, developed in the Late Chalcolithic. Other archeological evidence suggests that this particular phenomenon may have coincided with an attempt at organizing a regional economic or sociopolitical entity (E. F. Henrickson, forthcoming). The broad distribution of these distinctive ceramics, taken together with glyptic evidence (E. F. Henrickson, 1988) and the remains in several eastern Luristan valleys of large settlements (Goff, 1971), at least one of which permitted the apparently peaceful establishment of a lowland trading enclave in its midst (Weiss and Young), supports an economic explanation.
The special cases of Susa A fine and Bakun painted ware have been discussed above; as true “art” wares, they are probably the best candidates for medium- to long-distance ceramic exchange in the Iranian Chalcolithic, but available data are inconclusive, and strictly local production (probably by specialists at a few sites in each area) cannot be ruled out.
There are almost no archeological data for craft production other than ceramics in Chalcolithic Persia.
Only a few widely scattered examples of copper, stone, and glyptic work have been excavated. There are a number of sources for copper (q.v.) in central Persia, but copper processing is known from only one site of this period, Tal-i Iblis (Tal-e Eblīs) near Kermān (Caldwell, 1967; idem and Shahmirzadi). In Iblis I (Early Chalcolithic) and II (late Middle-Late Chalcolithic) hundreds of slag-stained crucible fragments were recovered, along with chunks of slag and rejected copper ore. Although the accompanying ceramics do not reflect outside contact, the presence of large quantities of pyrometallurgical debris and the remote location near copper sources strongly suggest that the site was established specifically to process locally mined copper ore in quantity for export (Caldwell, p. 34). Sialk, from which copper artifacts were recovered in various Chalcolithic levels (Ghirshman, 1938), was also located in a copper-bearing area, near Kāšān; there is no known direct evidence of copper processing at the site, but cast copper tools and ornaments (e.g., round-sectioned pins) were found (Ghirshman, 1938, pl. LXXXIV). In Chalcolithic Giyan V, west of Sialk in northeastern Luristan, copper objects included borers, small spirals, tubes, rectangular-sectioned pins, and a rectangular axe (Contenau and Ghirshman, pp. 16-45, 64ff.). Only a few other sites have yielded copper objects, including the axes from burial hoards at Susa. Copper thus seems to have been a rare and presumably expensive material throughout the Persian Chalcolithic. Direct, unequivocal evidence for other craft production and exchange (e.g., stone, glyptic, and textile work) is either rare or lacking altogether, though scattered small finds from various houses and graves suggest at least a low level of such craft activity in certain areas during certain phases. The exception is obsidian, which was obtained from Anatolian sources in small quantities throughout the Neolithic and Chalcolithic (see Hole, 1987b, pp. 86-87).
Burial practices. Outside the realm of economics and subsistence available archeological data and their interpretation are extremely problematic. The only evidence consists of sparse and unevenly preserved burials and associated structures and goods (for detailed discussion, see Hole, 1987b; idem, 1990). In the Early Chalcolithic all known highland and lowland burials (fewer than a dozen, from three sites: Seh Gabi, Jaffarabad, and Chogha Mish) are of infants or children, who were deposited under the floors of houses, a possible indication of family continuity and settlement stability. As in the Neolithic, grave goods were limited to a few modest personal items, mainly pots and simple jewelry, suggesting a relatively egalitarian society. These data reflect continuation of the predominant Neolithic pattern in southwestern Persia and in lowland Mesopotamia as well. Burying customs for adults are unknown; the burials must have been extramural, but no Early Chalcolithic cemetery has been identified. In the northern and central Zagros the Early Chalcolithic pattern continued to evolve in the next phase. At Dalma Tepe, Seh Gabi, and Kozagaran (Kūzagarān) children were buried under house floors but were first placed in pots or bowls. In contrast, a completely new burial form developed in Ḵūzestān. At Jaffarabad, Chogha Mish, Jowi (Jovī), and Bendebal infants (and a very few adults out of a relatively large sample) have been found in brick tombs outside the houses. Grave goods still consisted of a few simple utilitarian objects, primarily pots, with nothing to indicate differences in status. In the Pošt-e Kūh just north of Dehlorān abundant data have been recovered from almost 200 stone-lined tomb burials, mostly of adults, in the two pastoralist cemeteries, Parchineh and Hakalan. These cemeteries appear to reflect the adoption of lowland burial customs in the outer ranges of the Zagros, lending support to speculation about migration routes between the two areas and interaction between pastoralists and villagers. Grave goods were limited almost entirely to utilitarian ceramics and a few stone tools, weapons, and pieces of jewelry, insufficient to suggest significant differences in status.
The Late Chalcolithic burial sample is very small, except for the large mortuary at Susa. The few known burials were all of children or infants and generally continued the two Middle Chalcolithic patterns: Those from Seh Gabi and Giyan in the central highlands were in jars or pots without burial goods, though architectural context was unclear at both sites. Two infant burials from lowland Jaffarabad were in mat-lined mud “boxes,” accompanied only by pottery and a single seal; it is impossible to interpret the seal as a status item on the basis of this one instance. Although the large Susa A burial facility appears to have been unique in Chalcolithic Persia, it nevertheless reflected the Middle-Late Chalcolithic lowland custom of burial in brick tombs, demonstrating a formal standardization in the treatment of the dead: one corpse to a tomb, supine in an extended position. Grave goods were much more elaborate than elsewhere, but, with a few striking exceptions (hoards of copper objects), they, too, seem to have been standardized, consisting primarily of ceramic vessels ranging in quality from utilitarian “cooking pots” to distinctive Susa A fine painted goblets (often in the same tombs). The absence of an excavation record for this part of Susa is frustrating, but, even though the size and architectural elaboration of the site are evidence of its function as a regional center, the burials do not seem to reflect a society in which status differences were structurally the most important; rather, an emphasis on the unity of the regional “community” is suggested. It is possible, however, that only individuals or families of high status were buried at Susa and that the majority of those in the economic “sustaining area” were buried elsewhere, probably near their own homes. If so, then the simple fact of burial at the regional center, rather than elaborate individual tombs or grave goods, would have been the primary mark of high status. The rest of the population of Chalcolithic Persia seems to have lived in egalitarian villages or pastoral groups. Larger local settlement centers, involving development of sociopolitical and economic differences in status, were clearly the exception.
R. M. Adams, “The Mesopotamian Social Landscape. A View from the Frontier,” in C. B. Moore, ed., Reconstructing Complex Societies, Cambridge, Mass., 1974, pp. 1-20.
F. Bagherzadeh, ed., Proceedings of the IInd Annual Symposium on Archaeological Research in Iran, Tehran, 1974.
Idem, ed., Proceedings of the IIIrd Annual Symposium on Archaeological Research in Iran, Tehran, 1975.
Idem, ed., Proceedings of the IVth Annual Symposium on Archaeological Research in Iran, Tehran, 1976.
D. G. Bates and S. H. Lees, “The Role of Exchange in Productive Specialization,” American Anthropologist 79/4, 1977, pp. 824-41.
I. A. Brookes, L. D. Levine, and R. Dennell, “Alluvial Sequence in Central West Iran and Implications for Archaeological Survey,” Journal of Field Archaeology 9, 1982, pp. 285-99.
J. R. Caldwell, ed., Investigations at Tall-i Iblis, Illinois State Museum Preliminary Report 9, Springfield, Ill., 1967.
Idem and S. M. Shahmirzadi, Tal-i Iblis. The Kerman Range and the Beginnings of Smelting, Illinois State Museum Preliminary Report 7, Springfield, Ill., 1966.
D. Canal, “La haute terrasse de l’Acropole de Suse,” Paléorient 4, 1978a, pp. 39-46.
Idem, “La terrasse haute de l’Acropole de Suse,” CDAFI 9, 1978b, pp. 11-55.
G. Contenau and R. Ghirshman, Fouilles du Tépé Giyan près de Néhavend, 1931, 1932, Paris, 1935.
P. Delougaz, “The Prehistoric Architecture at Choga Mish,” in The Memorial Volume of the VIth International Congress of Iranian Art and Archaeology, Oxford, 1972, Tehran, 1976, pp. 31-48.
Idem and H. Kantor, “New Evidence for the Prehistoric and Protoliterate Culture Development of Khuzestan,” in The Memorial Volume of the Vth International Congress of Iranian Art and Archaeology, Tehran, 1972, pp. 14-33.
Idem, “The 1973-74 Excavations at Coqa Mis,” in Bagherzadeh, ed., 1975, pp. 93-102.
G. Dollfus, “Les fouilles à Djaffarabad de 1972 à 1974. Djaffarabad périodes I et II,” CDAFI 5, 1975, pp. 11-220.
Idem, “Djowi et Bendebal. Deux villages de la plaine centrale du Khuzistan (Iran),” CDAFI 13, 1983, pp. 17-275.
J. E. Gautier and G. Lampre, “Fouilles de Moussian,” MDAFP 8, 1905, pp. 59-149.
R. Ghirshman, Fouilles de Sialk près de Kashan, 1933, 1934, 1937 I, Paris, 1938.
C. Goff, New Evidence of Cultural Development in Luristan in the Late 2nd and Early First Millennium, Ph.D. diss., University of London, 1966.
Idem, “Luristan before the Iron Age,” Iran 9, 1971, pp. 131-52.
E. F. Henrickson, Ceramic Styles and Cultural Interaction in the Early and Middle Chalcolithic of the Central Zagros, Iran, Ph.D. diss., University of Toronto, 1983.
Idem, “The Early Development of Pastoralism in the Central Zagros Highlands (Luristan),” Iranica Antiqua 20, 1985a, pp. 1-42.
Idem, “An Updated Chronology of the Early and Middle Chalcolithic of the Central Zagros Highlands, Western Iran,” Iran 23, 1985b, pp. 63-108.
Idem, “Ceramic Evidence for Cultural Interaction between Chalcolithic Mesopotamia and Western Iran,” in W. D. Kingery, ed., Technology and Style. Ceramics and Civilization II, Columbus, Oh., 1986, pp. 87-133.
Idem, “Chalcolithic Seals and Sealings from Seh Gabi, Central Western Iran,” Iranica Antiqua 23, 1988, pp. 1-19.
Idem, “Stylistic Similarity and Cultural Interaction between the ʿUbaid Tradition and the Central Zagros Highlands,” in E. F. Henrickson and I. Thuesen, eds., 1990, pp. 368-402.
Idem, “The Outer Limits. Settlement and Economic Strategies in the Zagros Highlands during the Uruk Era,” in G. Stein and M. Rothman, eds., Chiefdoms and Early States in the Near East. The Organizational Dynamics of Complexity, Albuquerque, forthcoming.
Idem and I. Thuesen, eds., Upon This Foundation. The ʿUbaid Reconsidered, Copenhagen, Carsten Niebuhr Institute Publication 8, 1990.
Idem and V. Vitali, “The Dalma Tradition. Prehistoric Interregional Cultural Integration in Highland Western Iran,” Paléorient 13/2, 1987, pp. 37-46.
R. C. Henrickson, Godin III, Godin Tepe, and Central Western Iran, Ph.D. diss., University of Toronto, 1984.
F. Hole, Studies in the Archaeological History of the Deh Luran Plain. The Excavation of Chogha Sefid, The University of Michigan Museum of Anthropology Memoirs 9, Ann Arbor, Mich., 1977.
Idem, “Symbols of Religion and Social Organization at Susa,” in L. Braidwood et al., eds., The Hilly Flanks and Beyond. Essays on the Prehistory of Southwestern Asia, The University of Chicago Oriental Institute Studies in Ancient Oriental Civilization 36, Chicago, 1983, pp. 233-84.
Idem, “Analysis of Structure and Design in Prehistoric Ceramics,” World Archaeology, 15/3, 1984, pp. 326-47.
Idem, “Archaeology of the Village Period,” in F. Hole, ed., 1987a, pp. 29-78.
Idem, “Settlement and Society in the Village Period,” in F. Hole, ed., 1987b, pp. 79-106.
Idem, “Patterns of Burial in the Fifth Millennium,” in E. F. Henrickson and I. Thuesen, eds. (forthcoming).
Idem, ed., The Archaeology of Western Iran. Settlement and Society from Prehistory to the Islamic Conquest, Washington, D.C., 1987.
F. Hole, K. V. Flannery, and J. A. Neely, Prehistory and Human Ecology of the Deh Luran Plain, The University of Michigan Museum of Anthropology Memoirs 1, Ann Arbor, Mich., 1969.
H. Kantor, “The Excavations at Coqa Mish, 1974-75,” in Bagherzadeh, ed., 1976a, pp. 23-41.
Idem, “Prehistoric Cultures at Choga Mish and Boneh Fazili (Khuzistan),” in Memorial Volume of the VIth International Congress on Iranian Art and Archaeology, Oxford, 1972, Tehran, 1976b, pp. 177-94.
A. Langsdorff and D. E. McCown, Tal-i Bakun A, The University of Chicago Oriental Institute Publications 59, Chicago, 1942.
S. H. Lees and D. G. Bates, “The Origins of Specialized Pastoralism. A Systemic Model,” American Antiquity 39, 1974, pp. 187-93.
L. D. Levine, “Archaeological Investigations in the Mahidasht, Western Iran, 1975,” Paléorient 2/2, 1974, pp. 487-90.
Idem, “Survey in the Province of Kermanshahan 1975. Mahidasht in the Prehistoric and Early Historic Periods,” in Bagherzadeh, ed., 1976, pp. 284-97.
Idem and M. M. A. McDonald, “The Neolithic and Chalcolithic Periods in the Mahidasht,” Iran 15, 1977, pp. 39-50.
L. D. Levine and T. C. Young, Jr., “A Summary of the Ceramic Assemblages of the Central Western Zagros from the Middle Neolithic to the Late Third Millennium B.C.,” in J. L. Huot, ed., Préhistoire de la Mésopotamie. La Mésopotamie préhistorique et l’exploration récente du Djebel Hamrin, Paris, 1987, pp. 15-53.
M. M. A. McDonald, An Examination of Mid-Holocene Settlement Patterns in the Central Zagros Region of Western Iran, Ph.D. diss., University of Toronto, 1979.
Y. Majidzadeh, The Early Prehistoric Cultures of the Central Plateau of Iran. An Archaeological History of Its Development during the Fifth and Fourth Millennia B.C., Ph.D. diss., The University of Chicago, 1976.
Idem, “Excavations in Tepe Ghabristan. The First Two Seasons, 1970 and 1971,” Marlik 2, 1977, pp. 45-61.
Idem, “Corrections of the Chronology for the Sialk III Period on the Basis of the Pottery Sequence at Tepe Ghabristan,” Iran 16, 1978, pp. 93-101.
Idem, “Sialk III and the Pottery Sequence at Tepe Ghabristan,” Iran 19, 1981, pp. 141-46.
R. de Mecquenem, “Fouilles préhistoriques en Asie occidentale. 1931-1934,” l’Anthropologie 45, 1935, pp. 93-104.
J. de Morgan, “Observations sur les couches profondes de l’Acropole de Suse,” MDP 13, 1912, pp. 1-25.
P. Mortensen, “A Survey of Prehistoric Settlements in Northern Luristan,” Acta Archaeologica 45, 1974, pp. 1-47.
Idem, “Chalcolithic Settlements in the Holailan Valley,” in Bagherzadeh, ed.,1976, pp. 42-62.
S. Pollock, “Power Politics in the Susa A Period,” in E. F. Henrickson and I. Thuesen, eds. (forthcoming).
M. E. Prickett, “Tepe Yahya Project. Upper Rud-i Gushk Survey,” Iran 14, 1976, pp. 175-76.
Idem, Man, Land, and Water. Settlement Distribution and the Development of Irrigation Agriculture in the Upper Rud-i Gushk Drainage, Southeastern Iran, Ph.D. diss., Harvard University, 1986.
M. J. Steve and H. Gasche, L’Acropole de Suse, MDAFI 46, 1971.
W. Sumner, Cultural Development in the Kur River Basin, Iran. An Archaeological Analysis of Settlement Patterns, Ph.D. diss., University of Pennsylvania, Philadelphia, 1972.
Idem, “Early Settlements in Fars Province, Iran,” in L. D. Levine and T. C. Young, Jr., eds., Mountains and Lowlands. Essays in the Archaeology of Greater Mesopotamia, Malibu, Calif., 1977, pp. 291-305.
S. Swiny, “Survey in Northwest Iran, 1971,” East and West 25/1-2, 1975, pp. 77-96.
L. Vanden Berghe, “Excavations in Luristan. Kalleh Nissar,” Bulletin of the Asia Institute of Pahlavi University 3, 1973a, pp. 25-56.
Idem, “Le Luristan avant l’Age du Bronze. Le nécropole du Hakalan,” Archaeologia 57, 1973b, pp. 49-58.
Idem, “Le Lorestan avant l’Age du Bronze. La nécropole de Hakalan,” in Bagherzadeh, ed., 1974, pp. 66-79.
Idem, “Fouilles au Lorestan, la nécropole de Dum Gar Parchineh,” in Bagherzadeh, ed., 1975a, pp. 45-62.
Idem, “La nécropole de Dum Gar Parchinah,” Archaeologia 79, 1975b, pp. 46-61.
Idem, Mission Archéologique dans le Pusht-i Kuh, Luristan. IXe Campagne 1973. La nécropole de Dum Gar Parchinah (Rapport préliminaire), 2 vols., forthcoming.
M. Voigt, “Relative and Absolute Chronologies for Iran between 6500 and 3500 cal. B. C.,” in O. Aurenche, J. Evin, and F. Hours, eds., Chronologies in the Near East. Relative Chronologies and Absolute Chronology. 16,000-4,000 B.P., British Archaeological Reports International Series 379, Oxford, 1987, pp. 615-46.
Idem and R. H. Dyson, Jr., “The Chronology of Iran, ca. 8000-2000 B.C.,” in R. W. Ehrich, ed., Chronologies in Old World Archaeology, Chicago, forthcoming.
H. Weiss and T. C. Young, Jr., “The Merchants of Susa. Godin V and Plateau-Lowland Relations in the Late Fourth Millennium B.C.,” Iran 13, 1975, pp. 1-18.
H. T. Wright, An Early Town on the Deh Luran Plain. Excavations at Tepe Farukhabad, The University of Michigan Museum of Anthropology Memoirs 13, Ann Arbor, Mich., 1981.
Idem, “The Susiana Hinterlands during the Era of Primary State Formation,” in F. Hole, ed., 1987, pp. 141-56.
Idem et al., “Early Fourth Millennium Developments in Southwestern Iran,” Iran 13, 1975, pp. 129-48.
T. C. Young, Jr., Excavations at Godin Tepe, Royal Ontario Museum Art and Archaeology Occasional Papers 17, Toronto, 1969.
Idem, “An Archaeological Survey in Kangavar Valley,” in Bagherzadeh, ed., 1975, pp. 23-30.
Idem and L. D. Levine, Excavations at the Godin Project. Second Progress Report, Royal Ontario Museum Art and Archaeology Occasional Papers 26, Toronto, 1974.
A. Zagarell, The Prehistory of the Northeast Baḫtiyari Mountains, Iran, TAVO, Beihefte B42, Wiesbaden, 1982.
(Elizabeth F. Henrickson)
Originally Published: December 15, 1991
Last Updated: October 13, 2011
Vol. V, Fasc. 4, pp. 347-353 | http://www.iranicaonline.org/articles/chalcolithic-era-in-persia | 13 |
16 | distribution of wealth and income
distribution of wealth and income, the way in which the wealth and income of a nation are divided among its population, or the way in which the wealth and income of the world are divided among nations. Such patterns of distribution are discerned and studied by various statistical means, all of which are based on data of varying degrees of reliability.
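To make the idea of "statistical means" concrete, the sketch below computes two summaries commonly applied to income data of this kind: the share of total income received by each fifth of the population, and the Gini coefficient. It is an illustrative sketch only; the household incomes, function names, and choice of measures are assumptions of the example, not figures or methods taken from this article.

```python
# Illustrative only: made-up annual incomes for ten hypothetical households.
incomes = [12_000, 18_000, 25_000, 31_000, 38_000, 46_000, 57_000, 72_000, 110_000, 240_000]

def quintile_shares(values):
    """Share of total income received by each fifth of the population (lowest to highest)."""
    ordered = sorted(values)
    n, total = len(ordered), sum(ordered)
    size = n // 5  # assumes n is a multiple of 5, to keep the sketch short
    return [sum(ordered[i * size:(i + 1) * size]) / total for i in range(5)]

def gini(values):
    """Gini coefficient: 0 means perfect equality; values near 1 mean extreme concentration."""
    ordered = sorted(values)
    n, total = len(ordered), sum(ordered)
    weighted = sum((rank + 1) * x for rank, x in enumerate(ordered))
    return (2 * weighted) / (n * total) - (n + 1) / n

print(quintile_shares(incomes))  # last element is the top fifth's share of total income
print(round(gini(incomes), 3))
```

The caveat raised above applies equally here: the usefulness of either number depends entirely on how reliably the underlying income or wealth data were collected and defined.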
Wealth is an accumulated store of possessions and financial claims. It may be given a monetary value if prices can be determined for each of the possessions; this process can be difficult when the possessions are such that they are not likely to be offered for sale. Income is a net total of the flow of payments received in a given time period. Some countries collect statistics on wealth from legally required evaluations of the estates of deceased persons, which may or may not be indicative of what is possessed by the living. In many countries, annual tax statements that measure income provide more or less reliable information. Differences in definitions of income—whether, for example, income should include payments that are transfers rather than the result of productive activity, or capital gains or losses that change the value of an individual’s wealth—make comparisons difficult.
In order to classify patterns of national wealth and income, a basis of classification must be determined. One classification system categorizes wealth and income on the basis of the ownership of factors of production: labour, land, capital, and, occasionally, entrepreneurship, whose respective forms of income are labeled wages, rent, interest, and profit. Personal distribution statistics, usually developed from tax reports, categorize wealth and income on a per capita basis.
Gross national income (GNI) per capita provides a rough measure of annual national income per person in different countries. Countries that have a sizable modern industrial sector have a much higher GNI per capita than countries that are less developed. In the early 21st century, for example, the World Bank estimated that the per-capita GNI was approximately $10,000 and above for the most-developed countries but was less than $825 for the least-developed countries. Income also varies greatly within countries. In a high-income country such as the United States, there is considerable variation among industries, regions, rural and urban areas, females and males, and ethnic groups. While the bulk of the U.S. population has a middle income that is derived largely from earnings, wages vary considerably depending on occupation. (See also gross national product, gross domestic product.)
A significant proportion of an economy’s higher incomes will derive from investment rather than earnings. It is often the case that the higher the income, the higher the investment-derived portion tends to be. Because most fortunes require long periods to accumulate, the existence of a class of very wealthy persons can result from the ability of those persons to retain their fortunes and pass them on to descendants. Earned incomes are influenced by a different kind of inheritance. Access to well-paid jobs and social status is largely the product of education and opportunity. Typically, therefore, well-educated children of wealthier parents tend to retain their parents’ status and earning power. A dynamic economy, however, increases the likelihood of attaining wealth and status through individual effort alone.
| http://www.britannica.com/EBchecked/topic/638235/distribution-of-wealth-and-income | 13
42 | US History/War, Nationalism, and Division
The War of 1812
Precursors to the War
By the time James Madison took office as president in 1809, the U.S. was still a young nation. Though the war for independence had been fought and won, culminating in the Treaty of Paris in 1783, questions of U.S. sovereignty continued to be a source of contention between the United States and Great Britain. The country was also poorly prepared for another war. By 1812 the U.S. Military Academy at West Point, founded in 1802, had produced only eighty-nine regular officers, and senior army officers were aged Revolutionary War veterans or political appointees. Nor did the United States succeed at mustering sufficient forces: the government's efforts to lure recruits with sign-up bonuses, promises of three months' pay, and rights to purchase 160 acres of western land upon discharge met with mixed success. The disputes over sovereignty were felt most sharply on the American frontier (the British had agreed to recognize all of the land from the Atlantic Ocean to the Mississippi River, except for Spanish Florida) and on the high seas, where American sailors were pressed into service in the British Royal Navy as Britain waged war against Napoleonic France. The British did this to find and recover seamen who had deserted the Royal Navy for a relatively easier life at sea with the Americans. British warships would stop American vessels (such as the Chesapeake), claiming to be looking for deserters; when the Americans refused to allow their ship to be searched, the British opened fire, leaving eighteen Americans wounded. The incident outraged many Americans, who pressed Jefferson, often by anonymous letter, for war with Britain. Further, the British had recruited Indian allies, such as Tecumseh, encouraged Native American tribes to harass American settlers, and even continued to maintain forts on American soil, taking a particular interest in the Ohio Valley and Kentucky region because of the fur trade.
The British further enraged the Americans with their refusal to recognize U.S. neutrality in Britain's war with France. The British did not want the United States to engage in trade with France, even though Americans believed that they had the right to trade with whomever they wished.
In addition, many Americans wanted to push the British Empire off the North American continent altogether. President Madison and his advisers believed a conquest of Canada would be quick and easy, expecting that the British, preoccupied with their war against Napoleon, would offer little resistance. Former President Thomas Jefferson even stated that "the acquisition of Canada this year, as far as the neighborhood of Quebec, will be a mere matter of marching, and will give us the experience for the attack on Halifax, the next and final expulsion of England from the American continent."
Politics of the War
As was stated above, former President Jefferson and current President Madison, both Democratic-Republicans, supported the war to end British aggravation on both the frontier and the high seas, with the hope of taking over Canada from the British. However, New England Federalists opposed the war, which was driven by Southern and Western desires for more land. The war was highly unpopular in New England because the New England economy relied heavily on trade, especially with Great Britain.
A declaration of war was passed by Congress by an extremely small margin in the summer of 1812. Across the Atlantic, meanwhile, Prime Minister Spencer Perceval had been shot and killed, putting Lord Liverpool, who wanted to improve relations with the United States, in charge of the government. His government repealed the Orders in Council that had restricted American trade, but by then it was already too late: the war had begun.
War of 1812
The War of 1812 did not begin badly for the Federalists, who benefited from anti-war sentiment. They joined renegade Democratic-Republicans in supporting New York City mayor DeWitt Clinton for president in the election of 1812. Clinton lost to President Madison by 128 electoral votes to 89, a respectable showing against a wartime president, and the Federalists gained some congressional seats and carried many local elections. But the South and the West, the areas that favored the war, remained solidly Democratic-Republican. Both sides were rather unprepared to wage a war. The British did not have many troops in British North America at the time (some 5,000 or so), and meanwhile their war against Napoleon continued in continental Europe as the Royal Navy blockaded most of the European coastline.
The American military was still unorganized and undisciplined compared to the British military. Militias in New England and New York often refused to fight outside their own states. Desperate for soldiers, New York offered freedom to slaves who enlisted, with compensation to their owners, and the U.S. Army made the same offer to slaves in the Old Northwest and in Canada. In Philadelphia black leaders formed a black brigade to defend the city, but in the Deep South fear of arming slaves kept them out of the military, except in New Orleans, where a free black militia dated back to Spanish control of Louisiana. The British, on the other hand, recruited slaves by promising freedom in exchange for service. The regular army consisted of around 12,000 men, but the state militias were often unwilling to fight outside state lines (and often retreated when they did). This, combined with some difficult losses early on and the war's high unpopularity in New England, made the war effort much more difficult than President Madison had originally imagined.
The Atlantic Theater
The British navy was by far the preeminent naval force in the world and dominated the high seas. By contrast, the U.S. Navy was not yet even 20 years old and had a mere 22 vessels. The British plan was to protect their shipping in Canada while blockading major American ports.
However, there were a series of American naval victories on the Atlantic at this early stage of the war. On August 19, the USS Constitution engaged HMS Guerriere off the coast of Nova Scotia in the first notable naval encounter of the war. Guerriere was commanded by Captain Dacres, who was so confident that his ship could take Constitution that he reportedly boasted, "There is a Yankee frigate; in forty-five minutes she is surely ours. Take her in fifteen minutes and I promise you four months' pay." Once the ships had closed to about 25 feet, Constitution opened fire with cannon and grape shot. In the midst of the battle a cannonball fired from Guerriere bounced off Constitution's side, causing one American seaman to exclaim "Huzzah! Her sides are made of iron!" Guerriere, which had been instrumental in enforcing the British blockade, was decisively beaten, and her crew was brought aboard as prisoners. When it was realized that Guerriere could not be salvaged, she was set on fire and blown up. When Captain Hull of the Constitution reached Boston with the news, celebrations broke out. In December of the same year Constitution, now under Captain William Bainbridge, won another victory off the coast of Brazil against HMS Java, which was likewise rendered unsalvageable while Constitution remained largely unharmed. These victories, among the first against Great Britain on the high seas, earned the USS Constitution the nickname "Old Ironsides." Captain Hull's victory also sparked new hope among Americans and helped redeem earlier defeats in the west, including the loss of Fort Dearborn on August 15, 1812, and General William Hull's surrender of Detroit the following day.
Captain Stephen Decatur, who had gained fame during the First Barbary War, was also responsible for early naval victories. On October 25, 1812, Decatur, commanding the USS United States, captured HMS Macedonian. And in January 1813, Captain David Porter sailed the USS Essex into the Pacific to harass British shipping, in retaliation for British harassment of the American whaling industry. Essex inflicted some $3 million in damage on the British whaling industry before finally being captured off the coast of Chile on March 28, 1814.
Back on the Atlantic coast, meanwhile, Sir John Coape Sherbrooke embarked in September 1814 on what became known as the Penobscot Expedition, leading 500 British sailors against the coast of Maine (then part of Massachusetts), a main hub for smuggling between the British and the Americans. During this 26-day campaign Sherbrooke raided and looted several towns, destroyed 17 American vessels, won the Battle of Hampden, and occupied Castine for the remainder of the war.
The Great Lakes/Canadian/Western Theater
The western theater of the war was fought mostly in Michigan, Ohio, and along the Canadian border. Geography dictated that military operations in the west would take place primarily around Lake Erie, the Niagara River, Lake Ontario, the Saint Lawrence River, and Lake Champlain.
Chesapeake Campaign
The Chesapeake Bay region was a center of trade, commerce, and government during the eighteenth and nineteenth centuries, and it became a target of British military strategy during the War of 1812. The British brought the war into the Chesapeake area in 1813 and 1814. On July 4, 1813, Joshua Barney convinced the Navy Department to build twenty barges to protect the Chesapeake Bay. These barges were successful at harassing the Royal Navy, but in the end they proved unable to stop the British campaign that led to the burning of Washington. The White House and other structures were left ablaze all night, and the President and his Cabinet fled the capital. The attack on Washington was a diversion, however, and the major battle took place at Baltimore in 1814. It was during this battle that Francis Scott Key, detained on a British ship, watched the bombardment of Fort McHenry; the next morning he wrote the verses of "The Star-Spangled Banner," which would become the national anthem in 1931. The Chesapeake campaign made Americans realize that they were not a global power and that they were now losing a war because of their arrogance. Despite some victories on the Atlantic by the USS Constitution, USS Wasp, and USS United States, the U.S. Navy could not match the powerful British Royal Navy, which blockaded nearly every American port on the Atlantic and Gulf coasts. The blockade was so effective that U.S. trade declined by nearly 90 percent from its 1811 level. This major loss of revenue threatened to bankrupt the federal government; New England, at first largely exempted, was eventually cut off by the British blockade as well.
The Southern Theater
Connected to the War of 1812 was the Creek War in the South. The Creeks were supported by the British, and in March 1814 General Andrew Jackson and General John Coffee led a force of about 2,000 Tennessee militiamen, Choctaw, Cherokee, and U.S. regulars against the Creek Indians. Of the roughly 1,000 Creeks, led by Chief Menawa, some 800 were killed at the Battle of Horseshoe Bend, while only 49 of Jackson's men were killed. Jackson pursued the remaining Creeks until they surrendered.
At the end of 1814, General Jackson was on the move again, this time to New Orleans, Louisiana, to defend against invading British forces. In one of the greatest battles of the war, Jackson decisively routed the British, who suffered some 1,784 casualties against only about 210 for the Americans. The British forces left New Orleans, and the battle propelled Jackson to hero status, even though the war was technically already over: word had not yet reached the combatants that a peace treaty had been signed.
Hartford Convention
New England merchants and shippers had already been upset about the trade policies of the Jefferson administration (Embargo Act of 1807) and the Madison administration (Non-Intercourse Act of 1809), and had wholly opposed going to war with Great Britain in the first place because of the potential damage to New England industry. Thus the Federalist Party, which had been weakened at the end of the Adams administration, enjoyed a resurgence in popularity among the citizens of the New England states.
With trade outlawed and a British blockade in place, the New England states, particularly Massachusetts and Connecticut, felt the brunt of President Madison's wartime policies. This included what many New Englanders perceived as an attack on their states' sovereignty, as Madison maintained executive control over the military defense of New England rather than allowing the governors to take charge.
On October 10, 1814, the Massachusetts legislature voted for delegates from all five New England states to meet on December 15 in Hartford, Connecticut, to discuss constitutional amendments pertaining to the interests of New England states.
Twenty-six delegates gathered in Hartford. The meetings were held in secret and no records were kept. The Hartford Convention concluded with a report stating that states had a duty and responsibility to assert their sovereignty over encroaching and unconstitutional federal policy. In addition, a set of proposed Constitutional amendments was established, including:
- Prohibition of trade embargos lasting longer than 60 days;
- 2/3rds majority in Congress for declaration of offensive war, admission of new states, and interdiction of foreign commerce;
- Rescinding 3/5ths representation of slaves (perceived as an advantage to the South);
- One-term limit for the President of the United States; and
- A requirement that each succeeding president be from a different state than his predecessor.
While some delegates may have desired secession from the Union, no such proposal was adopted by the Convention.
Three commissioners from Massachusetts were sent to Washington, DC, to negotiate these terms in February 1815, but news that the war had ended and of General Jackson's victory at New Orleans preceded them. The act was perceived by many as disloyal, and the commissioners returned to Massachusetts. The Hartford Convention added to the ultimate decline of the Federalist Party.
Second Barbary War
Following the First Barbary War, the United States focused on the developing confrontation with Great Britain, giving the pirate states of the Barbary Coast an opportunity to ignore the terms of the treaty that had ended that war. The U.S., lacking the military resources to devote to the region, was forced to pay ransoms for captured American crews. The British expulsion of all U.S. vessels from the Mediterranean during the War of 1812 further emboldened the pirate states, and Umar ben Muhammad, the Dey of Algiers, expelled the U.S. consul general, Tobias Lear, and declared war on the United States for failing to pay tribute. Again, the situation went unaddressed because of the lack of U.S. military resources in the area.
After the end of the War of 1812, however, the U.S. was able to focus on American interests in North Africa. On March 3, 1815, Congress authorized the use of naval force against Algiers, and a force of ten ships was deployed under the commands of Commodores Stephen Decatur, Jr., and William Bainbridge. Decatur's squadron was the first to depart for the Mediterranean, on May 20.
Commodore Decatur quickly led the squadron to decisive victories over Algiers, capturing two Algerian-flagged ships en route. By the end of June, Decatur had reached Algiers and demanded compensation, threatening the Dey with destruction if he refused. The Dey capitulated, and a treaty was signed under which the captured Algerian ships were returned in exchange for the American captives (of whom there were approximately ten), several Algerian captives were exchanged for several European captives, $10,000 was paid for seized shipping, and guarantees were made to end the tribute payments and to grant the United States full shipping rights.
James Monroe Presidency and The Era of Good Feelings
Opposition to the War of 1812, capped by the Hartford Convention, terminally damaged the Federalists as a viable political party and left them portrayed by many as traitorous. The last serious Federalist candidate, Rufus King, ran for the presidency in 1816, losing to James Madison's secretary of state, James Monroe. The party disbanded in 1825.
Indeed, following the war a new wave of nationalism spread across the United States. Previously, citizens of the United States had tended to view themselves as citizens of their individual states (e.g., New Yorkers or Georgians) before they viewed themselves as Americans.
The wave of national pride and the lull in partisanship in the wake of the war with the British Empire led to what Benjamin Russell, a journalist for Boston's Columbian Centinel, dubbed an "Era of Good Feelings" when the newly elected President Monroe came through on a goodwill tour in 1817.
American System
Riding on the wave of newfound national pride, politicians such as Henry Clay of Kentucky, John C. Calhoun of South Carolina, and John Q. Adams of Massachusetts, following in Alexander Hamilton's footsteps, pushed an agenda to strengthen and unify the nation. The system, which came to be known as the American System, called for high tariffs to protect American industry and high land prices to generate additional federal revenue. The plan also called for strengthening the nation's infrastructure, such as roads and canals, which would be financed by tariffs and land revenue. The improvements would make trade easier and faster. Finally, the plan called for maintaining the Second Bank of the United States (chartered in 1816 for 20 years) to stabilize the currency and the banking system, as well as the issuance of sovereign credit. Congress also passed a protective tariff to aid industries that had flourished during the War of 1812 but were now threatened by the resumption of overseas trade. The Tariff of 1816 levied taxes on imported woolens and cottons, as well as on iron, leather, hats, paper, and sugar.
Although portions of the system were adopted (for example, taxes of 20-25% on foreign goods, which encouraged consumption of relatively cheaper American goods), others met with roadblocks, notably the infrastructure proposals, whose constitutionality was questioned: did the federal government have the power to fund such internal improvements? Despite this, two major infrastructure achievements were made in the form of the Cumberland Road and the Erie Canal. The Cumberland Road stretched from Cumberland, Maryland, to the Ohio River, facilitating travel and providing a gateway to the West for settlement. The Erie Canal extended from the Hudson River at Albany, New York, to Buffalo, New York, on Lake Erie, vastly improving the speed and efficiency of water travel in the northeast.
Opposition to the American System came mostly from the West and the South. Clay argued, however, that the West should support the plan because urban workers in the Northeast would be consumers of western food, and that the South should support it because northeastern factories would provide a market for its cotton. The South, however, strongly opposed tariffs and already had a strong market for its cotton anyway.
In short, the American System met with mixed results over the 1810s and 1820s due to various obstacles, but in the end, American industry benefited, and growth ensued.
Industrial Revolution
The Industrial Revolution, spanning the 18th and 19th centuries, brought major changes in agriculture, manufacturing, mining, transportation, and technology. It began in England and slowly made its way to the Americas.
Panic of 1819
Following the War of 1812, in addition to the relative absence of partisanship, the United States experienced a period of economic growth. However, around the same time that partisanship returned to Washington, the U.S. economy began to experience its first major financial crisis. Unlike the downturns of the 1780s and 1790s, this downturn originated primarily in the United States, and it caused foreclosures, bank failures, unemployment, and reductions in agricultural and manufacturing output.
Adams-Onis Treaty of 1819
Due to the purchase of the Louisiana Territory in 1803, the Adams-Onis Treaty of 1819 (by which the United States acquired the Florida territory), and the incorporation of the northern territories of Mexico into the United States in 1848 (the Mexican Cession), the number of Catholics in the United States nearly doubled.
Monroe Doctrine and Foreign Affairs
On December 2, 1823, President Monroe introduced the most famous aspect of his foreign policy in his State of the Union Address to Congress. The Monroe Doctrine, as it came to be called, stated that any further attempts by European powers to interfere in the affairs of the nations of the Western hemisphere (namely Latin America) would be seen as an act of aggression against the United States, requiring a U.S. response. The Monroe Doctrine came about as a result of U.S. and British fears that Spain would attempt to restore its power over former colonies in Latin America. President Monroe essentially sent notice that the Americas, both North and South, were no longer open to colonization by European powers.
The fact that the U.S. was still a young nation with very little naval power meant that the warning went largely ignored by the major powers. Despite this, the British approved of the policy and largely enforced it as part of the Pax Britannica, whereby the British Navy secured the neutrality of the high seas. It was mainly this support, rather than the Monroe Doctrine alone, that secured and maintained the sovereignty of Latin American nations.
Even so, the Monroe Doctrine was met with praise by Latin American leaders, despite the fact that they knew that the United States realistically could not enforce it without the backing of the British. In 1826, Latin American revolutionary hero Simón Bolívar called for the first Pan-American conference in Panama, and an era of Pan-American relations commenced.
Seminole War
Chief Neamathla of the Mikasuki at Fowltown engaged in a land dispute with the commander at Fort Scott, General Edmund Pendleton Gaines. The land had been ceded by the Creek in the Treaty of Fort Jackson; however, the Mikasuki did not consider themselves Creek and wished to exert sovereignty over the area, believing that the Creek had no right to cede Mikasuki land. In November 1817, a force of 250 men was sent by General Gaines to capture Neamathla, but it was driven back. A second attempt in the same month was successful, and the Mikasuki people were driven from Fowltown.
A week after the attack on Fowltown, a military boat transporting supplies, sick soldiers, and the families of soldiers (whether or not children were on board is not clear) to Fort Scott was attacked on the Apalachicola River. Most of the passengers on board were killed, with one woman captured and six survivors making it to Fort Scott.
General Gaines had been ordered not to invade Spanish Florida (save for small incursions). However, after word of the Scott massacre reached Washington, DC, Gaines was ordered to invade Florida in pursuit of Seminoles, but not to attack Spanish installations. However, Gaines had been ordered to eastern Florida to deal with piracy issues there, so Secretary of War John C. Calhoun ordered General Andrew Jackson, hero of the War of 1812, to lead the invasion.
General Jackson gathered his forces at Fort Scott in March 1818. The force consisted of 800 regulars, 1,000 Tennessee volunteers, 1,000 Georgia militia, and 1,400 friendly Creek warriors. Jackson's force entered Florida on March 13, following the Apalachicola River and constructing Fort Gadsden. The Indian town of Tallahassee was burned on March 31 and the town of Miccosukee was taken the next day. The American and Creek forces left 300 Indian homes devastated in their wake, reaching the Spanish fort of St. Marks on April 6, capturing it.
The American force left St. Marks and continued to attack Indian villages, capturing Alexander George Arbuthnot, a Scottish trader who worked out of the Bahamas and supplied the Indians, and Robert Ambrister, a former Royal Marine and self-appointed British agent, as well as the Indian leaders Josiah Francis and Homathlemico. All four were eventually executed. Jackson's forces also attacked villages occupied by runaway slaves along the Suwannee River.
Having declared victory, Jackson sent the Georgia militia and Creek warriors home, sending the remaining army back to St. Marks, where he left a garrison before returning to Fort Gadsden. On May 7, he marched a force of 1,000 to Pensacola where he believed the Indians were gathering and being supplied by the Spanish, against the protests of the governor of West Florida, who insisted that the Indians there were mostly women and children. When Jackson reached Pensacola on May 23, the governor and the Spanish garrison retreated to Fort Barrancas. After a day of exchanging cannon fire, the Spanish surrendered, and Colonel William King was named military governor of West Florida. General Jackson went home to Tennessee -- and prepared for his presidential run in 1824.
The 1824 Election and Presidency of John Q. Adams
With the dissolution of the Federalist Party, there were no organized political parties for the 1824 presidential election, and four Democratic-Republicans vied for the office. The Tennessee legislature and a convention of Pennsylvania Democratic-Republicans had nominated General-turned-Senator Andrew Jackson for president in 1822 and 1824, respectively. The Congressional Democratic-Republican caucus (the traditional way to nominate a president) selected Treasury Secretary William H. Crawford for president and Albert Gallatin for vice president. Secretary of State John Q. Adams, son of the former President Adams, and House Speaker Henry Clay also joined the contest. It is widely believed that Crawford would have won had he not suffered a debilitating stroke during the course of the election.
When the electoral votes were cast and counted, it turned out that no candidate had a majority. Jackson had won the most votes, but under the Constitution a plurality was not enough, and the vote for the top three candidates went to the House of Representatives. Clay, who had received the fewest votes, was ineligible, but he still wielded considerable power as Speaker of the House. Since Clay had a personal dislike of Jackson and supported many of Adams' policies, which were similar to his own American System, he threw his support to Adams, and Adams won the presidency, much to the chagrin of Jackson, who had won the most electoral and popular votes. After Adams appointed Clay as secretary of state, Jackson's supporters protested that a corrupt bargain had been struck.
The 1824 election helped spur the resurgence of political parties in America. Jackson's followers, members of the Democratic Party, were known as Jacksonians; Adams, Clay, and their supporters established the National Republican Party. Partisan politics was back in style in Washington, DC.
During Adams' term as president, he undertook an ambitious domestic agenda, implementing many aspects of the American System, such as extending the Cumberland Road and several canal projects like the Chesapeake and Ohio Canal, the Delaware and Chesapeake Canal, the Portland to Louisville Canal, the connection of the Great Lakes to the Ohio River system, and the enlargement and rebuilding of the Dismal Swamp Canal in North Carolina. He worked diligently to upgrade and modernize infrastructure and internal improvements, such as roads, canals, a national university, and an astronomical observatory. These internal improvements would be funded by tariffs, an issue which divided the Adams administration: while Secretary Clay most certainly supported tariffs, Vice President John C. Calhoun opposed them, and this became a source of contention within the administration.
Unfortunately for President Adams, his agenda met with many roadblocks. First of all, Adams' ideas were not very popular, even within his own party. But a major reason Adams had a tough time enacting his agenda was that the Jacksonians were still quite upset about the 1824 election; in 1827, the Jacksonians won control of Congress, making it even more difficult. In addition, Adams did not believe in removing administration officials from office except for incompetence, including those who were political opponents. As a result, many administration officials were, in fact, supporters of Andrew Jackson. Adams' comparatively generous policy towards Indians further alienated some, such as when the federal government sought to assert authority on behalf of the Cherokee, causing Georgia to take up arms. The final nail in the coffin of the Adams administration came when President Adams signed the Tariff of 1828 into law, which was intended to protect northern industry but which the South saw as an economic blow. The "Tariff of Abominations," as it was called, was highly unpopular in the South and virtually crippled the administration in its final year.
The 1828 campaign was brutal, bitter, and personal, with even Jackson's wife attacked and accused of bigamy. In the end, Adams lost handily, 178-83 in the electoral college. Adams, like his father, chose not to attend his successor's inauguration ceremony. In 1830, he would go on to become the first former president elected to Congress.
The People's President -- The Era of Andrew Jackson
Election and Inauguration
The three-week journey from Nashville, Tennessee, to Washington, DC, was filled with jubilation, as crowds swarmed to catch a glimpse of the new president-elect Andrew Jackson. The inauguration ceremonies of previous presidents had been indoor, invitation-only affairs. On March 4, 1829, however, there was a sense that this new president was a man of the people. The ceremony was held on the East Portico of the U.S. Capitol, where 21,000 people eventually gathered to view the swearing-in.
The new president left through the west front of the Capitol and proceeded on a white horse to the executive mansion for the reception. By the time he arrived, the White House had already been invaded by supporters, as the festivities had been opened to the public. Supreme Court Justice Joseph Story noted, "I never saw such a mixture. The reign of King Mob seemed triumphant."
The new president was forced to sneak out of the White House before heading to Alexandria, Virginia. The crowd remained, however, until the liquor was moved to the front lawn. The White House was left a mess, including thousands of dollars in broken china.
Petticoat Affair and the Kitchen Cabinet
The Petticoat Affair, also known as the Eaton Affair, was a scandal in the United States in 1830-1831 involving members of President Andrew Jackson's cabinet and their wives. Although it was ostensibly a private matter, it damaged several political careers. The affair centered on Peggy Eaton, who was accused of having had an affair with John Eaton while she was still married to the purser John Timberlake. The daughter of William O'Neal, who owned a Washington, D.C. boarding house for politicians where Peggy worked, she had grown up close to political circles, a familiarity that later fed gossip about her character. When Timberlake died at sea, many believed he had committed suicide after learning of his wife's alleged affair with Eaton, a good friend of the couple, although his death was officially attributed to pneumonia. Peggy married John Eaton less than a year after her husband's death, and many Washington wives considered the marriage improper. The controversy ultimately helped push several members of Jackson's cabinet, including Eaton himself, to resign, and people began to judge Jackson for his position on the marriage. Jackson had in fact recommended that John Eaton and Peggy marry, a view shaped by his own painful experience of attacks on his first wife. A group of anti-Peggy women, led by Floride Calhoun, insisted that a widow should mourn and wear black for a year following her husband's death.
Nullification Crisis
One of the early crises faced by the Jackson administration was the issue of nullification. In 1828, Congress decided to raise an already high tariff on imports from Europe. It was meant to help the industrialized North compete with Europe, but the agricultural South detested it, as it traded heavily with Europe. The South called it the "Tariff of Abominations."
The concept of nullification, that states had the right to nullify any federal law which it deemed went against its interests, had first appeared in the Virginia and Kentucky Resolutions in 1798. In response to the tariff, South Carolina declared it null and void. Vice President John C. Calhoun agreed with this notion of states’ rights and encouraged South Carolina to take a stand on the tariff issue.
Up until that point, no one was sure where Jackson stood on the issue of states' rights. Then, in April 1830, he announced that he opposed states' rights in this instance. While President Jackson sympathized with the South's position on the tariff, he believed in a strong union with central power. As a result, a deep rivalry developed between Jackson and Calhoun. The rivalry can be epitomized in an incident at the Jefferson Day dinner, April 13, 1830, in which South Carolina Senator Robert Hayne made a toast to "The Union of the States, and the Sovereignty of the States." President Jackson added (and clearly directed towards the vice president), "Our federal Union: It must be preserved!" To this, Vice President Calhoun responded: "The Union: Next to our Liberty, the most dear!" In 1832, the first ever Democratic National Convention was held, and former Secretary of State Martin Van Buren (who was still playing a vital role in the President's "kitchen cabinet") was selected to replace Calhoun as the nominee for vice president in the 1832 election. The vice president resigned in December 1832 to run for the South Carolina U.S. Senate seat.
Congress lowered the tariff somewhat in 1832, but the South would not compromise on this lower tax, and South Carolina passed the Nullification Act, which proclaimed that the state would no longer pay the "illegal" tariffs. South Carolina threatened to secede from the Union if the federal government tried to interfere.
President Jackson continued to oppose nullification, stating that "The Constitution... forms a government not a league... To say that any State may at pleasure secede from the Union is to say that the United States is not a nation." In 1832 he asked Congress to pass a "force bill," authorizing the use of military force to enforce the tariff law. The bill was held up in Congress until the great compromiser Henry Clay and the protectionists agreed to a Compromise Tariff bill. The Compromise Tariff contained a lower but still fairly high tariff. Both bills passed on March 1, 1833, and the president signed both.
In the face of the threat of military force, South Carolina quickly agreed to the lower compromise tariff and abolished the Nullification Act. The crisis was averted for another day.
Indian Policies
The lives of the Indians became even more troubled once a man who had been a friend, Andrew Jackson, became President of the United States. In 1830, Jackson signed the Indian Removal Act, which extinguished the claims of the indigenous people to their land. This affected five tribes of the east—Cherokee, Creek, Chickasaw, Choctaw, and Seminole—who were to be resettled in a designated Indian Territory, in what is now Oklahoma. In spite of legal protests from tribal leaders, Jackson wanted them gone, even if he had to use military force to do it. Jackson argued that the Indians were a threat to national security, and he also had a personal financial stake in some of the territory in question: with each treaty signed by the Native Americans, white investors could purchase the ceded lands for themselves. In spite of this, the Cherokee continued to pursue justice legally and won a victory in 1832, when the U.S. Supreme Court declared that the individual states had no jurisdiction within tribal lands. Jackson, arguing that Indian removal was in the national interest, ignored the ruling. The forced removals that followed became known as the Trail of Tears: beginning in the early 1830s and lasting for years, they drove the Indians off their land and forced them to migrate west.
Second Bank of the United States
The Second Bank of the United States was established about five years after the First Bank of the United States lost its charter. It began in the same place as the first, Carpenters' Hall in Philadelphia, and had many branches throughout the nation. Many of the same men from the First Bank ran the Second Bank after Congress refused to renew the First Bank's charter. The main reason the Second Bank arose was the War of 1812, during which the U.S. suffered terrible inflation and had trouble financing military operations. Like the first bank, many believed the Second Bank was corrupt; it ended up suffering from issues similar to those of the first bank and was ultimately dissolved.
Gag Rule
The Gag Rule of 1836 was a rule that limited or forbade the raising, consideration, or discussion of a particular topic by members of a legislative or decision-making body. The term originated in the mid-1830s, around 1836, when the U.S. House of Representatives barred discussion or referral to committee of antislavery petitions. The gag rule was supported by proslavery members and helped limit the progress of antislavery petitions; after such petitions began to emerge, the Democrats created the initial gag orders to prevent their consideration. John Quincy Adams, by then a member of the Whig Party, opposed the gag rule, saying that it limited and disregarded the basic civil rights of free citizens, and he made numerous pushes to eliminate it.
Panic of 1837
The Panic of 1837 was a financial crisis in the United States fueled in part by rampant speculation in land and property. The crisis came to a head in New York City on May 10, 1837, when the banks suspended specie (gold and silver) payments. Contributing causes included President Andrew Jackson's Specie Circular and his refusal to renew the charter of the Second Bank of the United States.
Reform and American Society
The Second Great Awakening
The Second Great Awakening was a religious movement during the early 19th century in the United States that reflected Arminian theology, by which every person could be saved through revivals. The Awakening grew largely in opposition to the deism associated with the French Revolution. It gained momentum after a revival in Utica, New York, hosted by Charles Grandison Finney. Finney told congregations across America that people were "moral free agents," challenging the Calvinist belief that everyone had a predetermined destiny. Awakenings were spiritual and religious revivals at which people congregated and confessed their sins. By 1831, church membership had grown by 100,000 solely as a result of awakenings carried out by preachers like Charles Finney and Theodore Weld.
Throughout the late 1700s and 1800s, alcoholism became an increasing problem, and as a result, temperance groups began forming in several states to reduce the consumption of alcohol. Although the temperance movement began with the intent of limiting use, some temperance leaders such as Connecticut minister Lyman Beecher began urging fellow citizens to abstain from drinking in 1825. In 1826, the American Temperance Society formed in a resurgence of religion and morality. By the late 1830s, the American Temperance Society had membership of 1,500,000, and many Protestant churches began to preach temperance.
Public Education
In the New England states, public education was common, even though it was class-based, with the working class receiving minimal benefits. Schools taught religious values as well as Calvinist philosophies of discipline, including corporal punishment and public humiliation. In 1833, Oberlin College had 29 men and 15 women in attendance, and it came to be known as the first college to admit women. Within five years, thirty-two boarding schools enrolled Indian students; they substituted English for American Indian languages and taught agriculture alongside the Christian Gospel. Horace Mann was considered "The Father of American Education." He wanted to develop schools that would help eliminate the differences between boys and girls in education, and he felt that schooling could help keep the crime rate down. He served as the first Secretary of the Massachusetts Board of Education from 1837 to 1848 and helped establish the first school for the education of teachers in America in 1839.
Asylum Movement
The Asylum Movement grew out of a heightened social conscience in the early 19th century that helped raise awareness of mental illness and its treatment. The first asylum in America opened in 1817 near Frankford, Pennsylvania; later in 1817 another asylum was established in Hartford, Connecticut. The asylums grew in popularity and influenced other states to create similar institutions, such as the Massachusetts State Lunatic Hospital in 1833. Prior to 1840, only wealthy people were admitted to the asylums; many mentally ill people who lacked the means were instead committed to jails and almshouses.
Abolitionism was the movement whose purpose was to abolish slavery. While many in the South defended the institution, others opposed the injustices and heinous acts committed against African Americans. Many people were involved in helping slaves escape to freedom, and as the movement expanded, the hostilities between the North and the South grew as well. The Underground Railroad stemmed from the hearts and minds of these abolitionist freedom fighters. Harriet Tubman and Frederick Douglass were two prominent African Americans who were part of the abolitionist movement.
- A People and A Nation, Eighth Edition | http://en.wikibooks.org/wiki/US_History/War,_Nationalism,_and_Division | 13 |
42 | There are two main types of taxes: (1) direct taxes and (2) indirect taxes.
Explanation of Direct Tax:
A tax is said to be a direct tax when the impact and the incidence of the tax fall on one and the same person, i.e., when the person on whom the tax is levied is the same person who finally bears the burden of the tax. For instance, income tax is a direct tax because its impact and incidence fall on the same person. If the impact of a tax falls on one person and the incidence on another, the tax is called indirect. For example, a tax on saleable articles is usually an indirect tax because it can be shifted on to the consumers.
Merits of Direct Tax:
(i) Direct taxes afford a greater degree of progression. They are, therefore, more equitable.
(ii) They entail lower collection costs and as such are economical.
(iii) They satisfy the canons of certainty, elasticity, productivity and simplicity.
(iv) Another advantage of direct taxes is that they create civic consciousness in people. When a person has to bear the burden of a tax, he takes an active interest in the affairs of the state.
Demerits of Direct Tax:
(i) It is easier to evade a direct tax than an indirect tax. The taxpayer is seldom happy when he pays tax; it pinches him that his hard-earned money is being taken by the government, so he often submits false statements of his income and thus tries to evade the tax. A direct tax is in fact a tax on honesty.
(ii) Direct taxes are very inconvenient because the taxpayer has to prepare lengthy statements of his income and expenditure. He has to keep a record of his income up to date throughout the year, and it is very laborious for him to prepare and keep these records.
(iii) Direct tax is to be paid in a lump sum every year, while the income which a person earns is received in small amounts. It often becomes difficult for taxpayers to pay large amounts in one installment.
Explanation of Indirect Tax:
Indirect taxes are those taxes which are paid in the first instance by one person and then shifted on to some other person. The impact is on one person, but the incidence is on another.
Merits of Indirect Tax:
(i) It is not possible to evade an indirect tax. The only way to avoid it is not to buy the taxed commodities.
(ii) Indirect taxes are more convenient because they are wrapped up in prices. The consumer often does not know that he is paying the tax.
(iii) Another advantage is that every member of society contributes something towards the revenue of the state.
(iv) Indirect taxes are also elastic to a certain extent. The state can increase its revenue within limits by increasing the rates of the taxes.
(v) If the state wishes to discourage the consumption of intoxicants and harmful drugs, it can raise their prices by taxing them. This is a great social advantage which a community can achieve from such a tax.
Demerits of Indirect Tax:
(i) A very serious objection leveled against indirect taxation is that it is regressive in character. It is inequitable: the burden of the tax falls more heavily on the poor than on the rich, as the numerical sketch below illustrates.
(ii) Indirect taxes are also uneconomical. The state has to spend large amounts of money on the collection of these taxes.
(iii) Revenue from indirect taxes is uncertain. The state cannot correctly estimate how much money it will receive from such a tax.
(iv) As the tax is wrapped up in prices, it does not create civic consciousness.
(v) If goods produced by manufacturers are taxed at higher rates, this hampers trade and industry and causes widespread unemployment in the country.
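The regressivity objection in (i) can be made concrete with a small numerical example. The sketch below is illustrative only: the household incomes, the amount of taxed spending and the flat 10% rate are invented figures, but they show the same tax bill claiming a far larger share of the poorer household's income.

```python
# Illustration of why a flat indirect tax is regressive.
# Incomes, spending and the 10% rate are hypothetical example figures.

TAX_RATE = 0.10          # assumed flat tax rate on purchases
TAXED_SPENDING = 5000.0  # both households buy the same taxed goods

for name, income in [("low-income household", 15000.0),
                     ("high-income household", 150000.0)]:
    tax_paid = TAXED_SPENDING * TAX_RATE
    share_of_income = 100.0 * tax_paid / income
    print(f"{name}: pays {tax_paid:.0f} in tax, {share_of_income:.2f}% of income")
```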
After discussing the merits and demerits of the two types of taxes, we come to the conclusion that, for reducing inequality of income and raising sufficient funds for the state, both these taxes are essential. A country should not place exclusive reliance on any one type, but should employ both forms of taxation. We may agree here with Gladstone when he says:
"Direct and indirect taxes are like two equally fair sisters to whom, as Chancellor of the Exchequer, he had to pay equal addresses."
In recent times, however, there has been a slight change in the utilization of both these types of taxes. Every state, in order to reduce inequality of income, is trying to raise the major portion of its income from direct taxes. | http://www.economicsconcepts.com/direct_tax_and_indirect_tax.htm | 13
37 | Money Connection—Unit 3
This lesson focuses on the impact of too much money or too little money flowing in the economy in terms of jobs, prices, and production of goods and services. A simulation is used to demonstrate the impact of inflation on the economy.
Money Connection Video
The Money Connection is a lively, two-part video (approximately 17 minutes) designed to introduce fourth through sixth grade audiences to the Federal Reserve System. The fast-paced, news show format combines historical photographs and live-action footage with interviews and animation sequences for a close up look at the history and important responsibilities and functions of the Federal Reserve.
The "What is a dollar worth?" calculator allows you to compare prices for goods or services from different periods of time. For example, you can compare the price you paid for a dozen eggs in 1972 with the price you paid last week.
Time Value of Money Online Learning Module
This online learning module helps students learn about the time value of money, opportunity costs, interest and inflation. The module also focuses on mathematical components and calculations of the related time value formulas.
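As a rough illustration of the kind of formulas such a module covers, the sketch below compounds a deposit at a nominal interest rate and then deflates the result by an assumed inflation rate to get its real value. The rates and horizon are invented inputs for the example, not figures from the module itself.

```python
# Sketch of two time-value-of-money calculations: future value at a
# nominal rate, and the inflation-adjusted (real) value of that amount.
# The rates and horizon are illustrative assumptions only.

def future_value(principal: float, rate: float, years: int) -> float:
    """Compound interest: FV = PV * (1 + r)^n."""
    return principal * (1 + rate) ** years

def real_value(nominal_amount: float, inflation: float, years: int) -> float:
    """Deflate a future nominal amount by cumulative inflation."""
    return nominal_amount / (1 + inflation) ** years

if __name__ == "__main__":
    fv = future_value(1000.0, 0.05, 10)   # $1,000 at 5% for 10 years
    rv = real_value(fv, 0.03, 10)         # assuming 3% annual inflation
    print(f"Nominal future value: ${fv:,.2f}")
    print(f"Inflation-adjusted value: ${rv:,.2f}")
```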
Consumer Price Index Video
This Drawing Board video explains the Median Consumer Price Index (CPI) and how it is used to gauge inflation.
Other Inflation Teaching Ideas
Find out what methods other educators in the Southeast have used to teach concepts related to inflation.
The Fed Today—Lesson Four: The Fed's Role in Making and Setting Monetary Policy
This lesson focuses on price stability and inflation. Students discuss how to define inflation and analyze the relationship between the money supply and the price level using the Fisher Equation. Students then examine the harmful effects of inflation on the economy. Finally, small groups of students determine how business and consumer behavior changed during the 1970s when inflation had a negative impact on the nation's economy.
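The money-supply/price-level relationship referred to here is usually written as the equation of exchange, MV = PQ, which is closely associated with Irving Fisher. The sketch below simply rearranges it to solve for the price level; the numbers are invented classroom inputs, and the lesson's own materials may present the relationship differently.

```python
# Equation of exchange (often attributed to Irving Fisher): M * V = P * Q.
# Rearranged, the price level is P = M * V / Q.
# All inputs below are invented illustrative numbers.

def price_level(money_supply: float, velocity: float, real_output: float) -> float:
    return money_supply * velocity / real_output

if __name__ == "__main__":
    base = price_level(money_supply=1000.0, velocity=4.0, real_output=5000.0)
    doubled = price_level(money_supply=2000.0, velocity=4.0, real_output=5000.0)
    print(f"Price level with M = 1000: {base:.2f}")
    print(f"Price level with M = 2000: {doubled:.2f} (prices double if V and Q are unchanged)")
```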
A Lesson to Accompany "Benjamin Franklin and the Birth of a Paper Money Economy"
In this lesson, students learn about the role of money in the colonial economy by participating in a trading activity in which they observe the effects of too little money on trade within a colony. In the final activity, students learn how too much money can lead to inflation. Related essay
Advanced High School to College
The Economy: Crisis and Response—The Road Ahead
As the economy moves toward recovery, the Fed will remain active in its response, including unwinding certain policies as conditions warrant. This website examines the economic outlook, why inflation is a topic of focus, and the changes in regulation needed to strengthen the financial markets.
The Inflation Project
Tracking inflation and its effects is a vital component of the Federal Reserve's monetary policy. The Inflation Project regularly compiles links to data releases, reports, research, and international inflation updates.
The Summary of Commentary on Current Economic Conditions, commonly known as the Beige Book, gathers anecdotal information on current economic conditions in each Federal Reserve District through reports from Bank and Branch directors and interviews with key business contacts, economists, market experts, and other sources. The Beige Book summarizes this information by District and sector. | http://www.frbatlanta.org/edresources/classroomeconomist/inflation_resources.cfm | 13 |
15 | Traditional methods of food drying involve spreading the foodstuffs out in the sun in the open air. This method, called sun drying, is effective for small amounts of food. The area needed for sun drying expands with the quantity of food, and since the food is placed in the open air, it is easily contaminated. Therefore, one major reason why sun drying is not easily performed with larger quantities of food is that monitoring and overview become increasingly difficult as quantities increase.
In contrast to sun drying, where the meat is exposed directly to the sun, the solar drying method uses indirect solar radiation. The principle of the solar drying technique is to collect solar energy by heating up the air volume in solar collectors and to conduct the hot air from the collector to an attached enclosure, the meat drying chamber. Here the products to be dried are laid out.
In this closed system, consisting of a solar collector and a meat drying chamber, without direct exposure of the meat to the environment, meat drying is more hygienic as there is no secondary contamination of the products through rain, dust, insects, rodents or birds. The products are dried by hot air only. There is no direct impact of solar radiation (sunshine) on the product. The solar energy produces hot air in the solar collectors. Increasing the temperature in a given volume of air decreases the relative air humidity and increases the water absorption capacity of the air. A steady stream of hot air into the drying chamber circulating through and over the meat pieces results in continuous and efficient dehydration.
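The claim that heating a given parcel of air lowers its relative humidity can be checked with a standard saturation-vapour-pressure approximation (a Magnus-type formula). The sketch below assumes the absolute moisture content of the air stays the same while it is heated in the collector; the temperatures and ambient humidity used are arbitrary example values.

```python
import math

# Magnus-type approximation for saturation vapour pressure over water,
# with temperature in degrees Celsius and the result in hPa.
def saturation_vapour_pressure(temp_c: float) -> float:
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

def heated_relative_humidity(rh_in: float, t_in: float, t_out: float) -> float:
    """Relative humidity after heating air from t_in to t_out at constant
    absolute moisture content (vapour pressure unchanged)."""
    vapour_pressure = rh_in / 100.0 * saturation_vapour_pressure(t_in)
    return 100.0 * vapour_pressure / saturation_vapour_pressure(t_out)

if __name__ == "__main__":
    # Example: ambient air at 25 C and 70% RH heated to 45 C in the collector.
    rh_out = heated_relative_humidity(rh_in=70.0, t_in=25.0, t_out=45.0)
    print(f"Relative humidity after heating: {rh_out:.0f}%")
```

Under these example conditions the relative humidity drops from 70% to roughly 23%, which is why the heated air can absorb so much more moisture from the food.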
The solar dryer is a relatively simple concept. The basic principles employed in a solar dryer are:
- Converting light to heat: Any black surface on the inside of a solar dryer will improve the effectiveness of turning light into heat.
- Trapping heat: Isolating the air inside the dryer from the air outside the dryer makes an important difference. Using a clear solid, like a plastic bag or a glass cover, will allow light to enter, but once the light is absorbed and converted to heat, a plastic bag or glass cover will trap the heat inside. This makes it possible to reach similar temperatures on cold and windy days as on hot days.
- Moving the heat to the food. Both the natural convection dryer and the forced convection dryer use the convection of the heated air to move the heat to the food.
There are a variety of solar dryer designs. Principally, solar dryers can be categorized into three groups: a) natural convection dryers, which use the natural vertical convection that occurs when air is heated; b) forced convection dryers, in which the air is forced over the food by a fan; and c) tunnel dryers.
While several different designs of the solar dryers exist, the basic components of a solar dryer are illustrated in Figure 1. In the case of a forced convection dryer, an additional component would be the fan.
The structure of a tunnel dryer is relatively simple. The basic design components of a tunnel dryer are the following:
- A semicircular solar tunnel in the form of a polyhouse framed structure covered with UV-stabilized polythene sheet
- The structure is, in contrast to the other dryer designs, large enough for a person to enter
The design of a tunnel dryer is illustrated in Figure 2. In addition, the technology teaser image at the top of this description is an image of the inside of a tunnel dryer.
Natural Convection Dryer: Large-Scale Design
Generally, natural convection dryers are sized appropriately for on-farm use. One design that has undergone considerable development by the Asian Institute of Technology in Bangkok, Thailand is shown in Figure 3. This natural convection dryer is a large-scale structure: the collector is 4.5 meters long and 7 meters wide and the drying bin is 1 meter long and 7 meters wide. The structure consists of three main components: a solar collector, a drying bin and a solar chimney. The drying bin in this design is made of bamboo matting. In addition to the collector, air inside the solar chimney is heated, which also increases the thermal draught through the dryer. The solar chimney is covered with black plastic sheet in order to increase the thermal absorption. A disadvantage of the dryer is its high structural profile, which poses stability problems in windy conditions, and the need to replace the plastic sheet every 1-2 years.
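To get a feel for what a collector of the size described above can do, a back-of-the-envelope estimate can multiply the 4.5 m by 7 m collector area by an assumed daily insolation and collector efficiency and divide by the latent heat of vaporisation of water. The insolation and efficiency figures in the sketch below are assumptions for illustration, not measured values for the AIT design.

```python
# Rough estimate of daily water-evaporation capacity for the large
# natural convection dryer described above. Insolation and efficiency
# are illustrative assumptions, not measured values for this design.

COLLECTOR_AREA_M2 = 4.5 * 7.0          # collector dimensions from the text
INSOLATION_KWH_PER_M2_DAY = 5.0        # assumed daily solar insolation
COLLECTOR_EFFICIENCY = 0.35            # assumed fraction converted to useful heat
LATENT_HEAT_MJ_PER_KG = 2.26           # energy needed to evaporate 1 kg of water

useful_energy_mj = (COLLECTOR_AREA_M2 * INSOLATION_KWH_PER_M2_DAY
                    * 3.6 * COLLECTOR_EFFICIENCY)   # 1 kWh = 3.6 MJ
water_removed_kg = useful_energy_mj / LATENT_HEAT_MJ_PER_KG

print(f"Collector area: {COLLECTOR_AREA_M2:.1f} m^2")
print(f"Useful heat per day: {useful_energy_mj:.0f} MJ")
print(f"Approximate water evaporated per day: {water_removed_kg:.0f} kg")
```

Under these assumptions the collector could evaporate on the order of tens of kilograms of water per day, which gives a sense of why collector size must grow with the quantity of food to be dried.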
Figure 4 shows a smaller design for a natural convection dryer. The capacity of this dryer is ten times smaller than the capacity for food drying in the larger design. However, the design is simple to build and is less susceptible to stability problems.
Natural Convection Dryer: Small-Scale Design
These solar food dryers are basically wooden boxes with vents at the top and bottom. Food is placed on screened frames which slide into the boxes. A properly sized solar air heater with south-facing plastic glazing and a black metal absorber is connected to the bottom of the boxes. Air enters the bottom of the solar air heater and is heated by the black metal absorber. The warm air rises up past the food and out through the vents at the top (see Figure 5). While operating, these dryers produce temperatures of 130–180° F (54–82° C), which is a desirable range for most food drying and for pasteurization. With these dryers, it’s possible to dry food in one day, even when it is partly cloudy, hazy, and very humid. Inside, there are thirteen shelves that will hold 35 to 40 medium sized apples or peaches cut into thin slices.
In the case of forced convection dryers, the structure can be relatively similar. However, the forced convection dryer requires a power source for the fans that provide the air flow. It does not require an incline for the air flow: the collector can be placed horizontally, with the fan at one end and the drying bin at the other. In addition, the forced convection dryer is less dependent on solar energy because it provides the air flow itself, which allows the design to work in weather conditions in which the natural convection dryer does not. As inadequate ventilation is a primary cause of food loss in solar food dryers, and is made worse by intermittent heating, it is essential to ensure proper ventilation. Adding a forced convection flow, for instance through a PV solar cell connected to a fan, helps prevent such losses.
Drying is an important step in the food production process. The main argument for food drying is to preserve the food for longer periods of time. However, it is important to note that the process is not just concerned with the removal of moisture content from the food. Additional quality factors are influenced by the selection of drying conditions and equipment:
- Moisture Content. It is essential that the foodstuff after drying is at a moisture content suitable for storage. The desired moisture content will depend on the type of food, duration of storage and the storage conditions available. The drying operation is also essential in minimizing the range of moisture levels in the batch of food as portions of under-dried food can lead to deterioration of the entire batch.
- Nutritive value. Food constituents can be adversely affected when excessive temperatures are reached.
- Mould growth. The rate of development of micro-organisms is dependent on the food moisture content, temperature and the degree of physical damage to the food.
- Appearance and smell of the food. For example, the colour of milled rice can be adversely affected if the paddy is dried with direct heated dryers with poorly maintained or operated burners or furnaces.
Therefore, it is essential to not only monitor the moisture content of the foodstuffs, but to also monitor temperature, mould growth, appearance and smell of food, air flow, etc. Whether a natural convection dryer, a forced convection dryer or a tunnel dryer is appropriate depends on the amount of food, the climate and the demands placed on the end-product (how long does it need to be stored, in what quantities, etc.). A typical pattern of several of these factors is shown in Figure 6.
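Monitoring moisture content, the first factor listed above, usually comes down to a wet-basis percentage and a simple mass balance for how much water must be removed. The sketch below illustrates both calculations; the starting and target moisture contents are arbitrary example figures rather than recommendations for any particular crop.

```python
# Wet-basis moisture content and the mass of water that must be removed
# to reach a target moisture content. All figures are example values.

def moisture_content_wet_basis(wet_mass: float, dry_matter_mass: float) -> float:
    """Moisture content (%) = water mass / total wet mass * 100."""
    return 100.0 * (wet_mass - dry_matter_mass) / wet_mass

def water_to_remove(initial_mass: float, mc_initial: float, mc_target: float) -> float:
    """Mass of water to evaporate so the product ends at mc_target (wet basis)."""
    dry_matter = initial_mass * (1 - mc_initial / 100.0)
    final_mass = dry_matter / (1 - mc_target / 100.0)
    return initial_mass - final_mass

if __name__ == "__main__":
    batch_kg = 100.0   # example: 100 kg of fresh produce
    removed = water_to_remove(batch_kg, mc_initial=80.0, mc_target=15.0)
    print(f"Water to remove from {batch_kg:.0f} kg at 80% -> 15% moisture: {removed:.1f} kg")
```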
In addition, an important feature of solar drying devices is the size of the solar collectors. Depending on the quantity of goods to be dried, collectors must have the capacity to provide sufficient quantities of hot air to the drying chamber. Collectors which are too small in proportion to the amount of food to be dried will result in failed attempts and spoiled food.
According to the FAO (no date), the most common drying method of grain in tropical developing countries is sun drying. The process of sun drying starts when the crop is standing in the field prior to harvest; maize may be left on the standing plant for several weeks after attaining maturity. However, this may render the grain subject to insect infestation and mould growth. In addition, it prevents the land being prepared for the next crop and is vulnerable to theft and damage from animals.
A more controlled practice is to bring the foodstuffs into a structure which is specifically designed for food drying. This removes the issue of bacterial contamination, theft and insect infestation. Modern variations are to dry food in special enclosed drying racks or cabinets and expose the food to a flow of dry air heated by electricity, propane or solar radiation.
Although it is difficult to establish the current status of the technology in terms of market penetration as data on this technology is insufficient, some general remarks can be made about the market potential.
There seem to be no major design barriers to a solar dryer: the design is easy to build with a minimum of materials required. This is especially true for the natural convection dryer, which doesn't require any machinery or energy source (other than the solar energy source). In contrast, the forced convection dryer, the electricity-heated design and the propane-fuelled dryers all require some form of machinery and an external heat source (in the form of electricity or propane). This complicates their designs and makes their operation more expensive. However, these designs possibly do have lower food loss rates due to a more constant air flow.
Related to the previous remark, the easy design cuts costs. The design can be made primarily from materials found in the local surroundings. For instance the frame of the structure can be constructed from wood, bamboo or any other natural product that is strong enough. This characteristic enhances the market potential of this product.
The technology provides several socio-economic benefits. As the FAO (2010) notes, one of the main issues facing developing countries today is the issue of food security. The solar food dryer can improve food security through allowing the longer storage of food after drying compared to food that hasn't been dried.
The solar dryer can save fuel and electricity when it replaces dryer variations that require an external energy source in the form of electricity or fossil fuel. In addition solar food dryers cut drying times in comparison to sun drying. While fossil fuel or electrically powered dryers might provide certain benefits (more consistent air flow and higher temperatures), the financial barriers that these technologies provide might be too high for marginal farmers. For instance, electricity might be not available or too expensive and fossil fuel powered drying might pose large initial and running costs.
Fruits, vegetables and meat dried in a solar dryer are better in quality and hygiene compared to fruits, vegetables and meat dried in sun drying conditions. As mentioned, due to the closed system design, contamination of food is prevented or minimized. In addition, the food is not vulnerable to rain and dust, compared to the open system design of sun drying.
In rural areas where farmers grow fruits and vegetables without proper food drying facilities, the farmers need to sell the food in the market shortly after harvesting. When food production is high, the farmers have to sell the food at low price to prevent the food from losing value through decomposition. Therefore, the solar food dryer might be able to prevent the financial losses farmers in these situations face. Dried food can be stored longer and retain quality longer. Moreover, dried fruits and vegetables might be sold as differentiated products which possibly enhances their market value. For example, dried meat can be processed into a variety of different products.
Drying food reduces its volume. Therefore, in combination to longer storage times, the food is also more easily transported after drying which potentially opens up additional markets to the producer of the food.
While there is insufficient data at the moment to elaborate fully on the financial requirements and costs of this technology, certain general remarks can be made.
For natural convection dryers, the financial requirements are low. The structure is made from components that are mostly easily available (wood, bamboo and other strong construction materials). The major cost components are likely to be the glass and plastic sheets required to trap the heat, which must be purchased. Operational costs of the natural convection technology are limited to labour costs. Forced convection dryers have higher initial costs and higher operational costs, as the fan needs to be purchased and operated.
As mentioned, dried food products might yield a higher price on the market because they can be sold out of season (the fresh version might no longer be available in a particular season, which can increase the price of the dried version of the food).
FAO, 2010. “Climate-Smart” Agriculture - Policies, Practices and Financing for Food Security, Adaptation and Mitigation. Food and Agriculture Organization of the United Nations 2010. Document can be found at: http://www.fao.org
FAO, no date. Information retrieved from the following websites: http://www.fao.org/docrep/t0395e/T0395E04.htm , http://www.fao.org/docrep/x0209e/x0209e06.htm and http://www.fao.org/docrep/t1838e/T1838E0v.htm | http://climatetechwiki.org/print/technology/jiqweb-edf | 13 |
19 | The American Civil Rights Movement (1955–1968) refers to the reform movements in the United States aimed at abolishing racial discrimination against African Americans and restoring suffrage in Southern states. This article covers the phase of the movement between 1954 and 1968, particularly in the South. By 1966, the emergence of the Black Power Movement, which lasted roughly from 1966 to 1975, enlarged the aims of the Civil Rights Movement to include racial dignity, economic and political self-sufficiency, and freedom from oppression by whites.
Many of those who were most active in the Civil Rights Movement, with organizations such as SNCC, CORE and SCLC, prefer the term "Southern Freedom Movement" because the struggle was about far more than just civil rights under law; it was also about fundamental issues of freedom, respect, dignity, and economic and social equality.
After the disputed election of 1876 and the end of Reconstruction, White Americans in the South resumed political control of the region under a one-party system of Democratic control. The voting rights of blacks were increasingly suppressed, racial segregation imposed, and violence against African Americans mushroomed. This period is often referred to as the "nadir of American race relations," and while it was most intense in the South to a lesser degree it affected the entire nation.
The system of overt, state-sanctioned racial discrimination and oppression that emerged out of the post-Reconstruction South and spread nation-wide became known as the "Jim Crow" system, and it remained virtually intact into the early 1950s. Systematic disenfranchisement of African Americans took place in Southern states at the turn of the century and lasted until national civil rights legislation was passed in the mid-1960s. For more than 60 years, they were not able to elect one person in the South to represent their interests. Because they could not vote, they could not sit on juries limited to voters. They had no part in the justice system or law enforcement, although in the 1880s, they had held many local offices, including that of sheriff.
African-Americans and other racial minorities rejected this regime. They resisted it and sought better opportunities through lawsuits, new organizations, political redress, and labor organizing (see the American Civil Rights Movement 1896-1954). The National Association for the Advancement of Colored People (NAACP) was founded in 1909 and it struggled to end race discrimination through litigation, education, and lobbying efforts. Its crowning achievement was its legal victory in the Supreme Court decision Brown v. Board of Education (1954) that rejected separate white and colored school systems and by implication overturned the "separate but equal" doctrine established in Plessy v. Ferguson.
Since the situation for blacks outside the South was somewhat better (in most states they could vote and have their children educated, though they still faced discrimination in housing and jobs), from 1910-1970, African Americans sought better lives by migrating north and west in the millions, a huge population movement collectively known as the Great Migration.
Invigorated by the victory of Brown and frustrated by its lack of immediate practical effect, private citizens increasingly rejected gradualist, legalistic approaches as the primary tool to bring about desegregation in the face of "massive resistance" by proponents of racial segregation and voter suppression. In defiance, they adopted a combined strategy of direct action with nonviolent resistance known as civil disobedience, giving rise to the African-American Civil Rights Movement of 1955-1968.
During the period 1955-1968, acts of civil disobedience produced crisis situations between protesters and government authorities. The authorities of federal, state, and local governments often had to respond immediately to crisis situations which highlighted the inequities faced by African Americans. Forms of civil disobedience included boycotts, beginning with the successful Montgomery Bus Boycott (1955-1956) in Alabama; "sit-ins" such as the influential Greensboro sit-in (1960) in North Carolina; and marches, such as the Selma to Montgomery marches (1965) in Alabama.
Noted legislative achievements during this phase of the Civil Rights Movement were passage of Civil Rights Act of 1964, that banned discrimination in employment practices and public accommodations; the Voting Rights Act of 1965, that restored and protected voting rights; the Immigration and Nationality Services Act of 1965, that dramatically opened entry to the U.S. to immigrants other than traditional European groups; and the Civil Rights Act of 1968, that banned discrimination in the sale or rental of housing.
Churches, the centers of their communities, and local grassroots organizations mobilized volunteers to participate in broad-based actions. This was a more direct and potentially more rapid means of creating change than the traditional approach of mounting court challenges.
The Montgomery Improvement Association—created to lead the boycott—managed to keep the boycott going for over a year until a federal court order required Montgomery to desegregate its buses. The success in Montgomery made its leader Dr. Martin Luther King a nationally known figure. It also inspired other bus boycotts, such as the highly successful Tallahassee, Florida, boycott of 1956-1957.
In 1957, Dr. King and Rev. Ralph Abernathy, leaders of the Montgomery Improvement Association, joined with other church leaders who had led similar boycott efforts, such as Rev. C. K. Steele of Tallahassee and Rev. T. J. Jemison of Baton Rouge, and with other activists such as Rev. Fred Shuttlesworth, Ella Baker, A. Philip Randolph, Bayard Rustin and Stanley Levison, to form the Southern Christian Leadership Conference. The SCLC, with its headquarters in Atlanta, Georgia, did not attempt to create a network of chapters as the NAACP did. It offered training and leadership assistance for local efforts to fight segregation. The headquarters organization raised funds, mostly from northern sources, to support such campaigns. It made non-violence both its central tenet and its primary method of confronting racism.
In 1959, Septima Clark, Bernice Robinson, and Esau Jenkins, with the help of the Highlander Folk School in Tennessee, began the first Citizenship Schools on South Carolina's Sea Islands. They taught literacy to enable blacks to pass voting tests. The program was an enormous success and tripled the number of black voters on Johns Island. SCLC took over the program and duplicated its results elsewhere.
One of Martin Luther King's strategies was to challenge mainstream America on moral grounds to end the racial abuse and segregation in the South. The medium of television was particularly effective at conveying news about the conditions and quality of life of African Americans in the South. News broadcasts and documentary film-making were the first forms for presenting these stories. Later, in the 1970s, the television miniseries "Roots," based on Alex Haley's book, was said to be a turning point in mainstream America's ability to relate to the stresses and particularities of African-American history.
On December 1, 1955, Rosa Parks (the "mother of the Civil Rights Movement") refused to give up her seat on a public bus to make room for a white passenger. She was secretary of the Montgomery NAACP chapter and had recently returned from a meeting at the Highlander Center in Tennessee where nonviolent civil disobedience as a strategy had been discussed. Parks was arrested, tried, and convicted for disorderly conduct and violating a local ordinance. After word of this incident reached the black community, 50 African-American leaders gathered and organized the Montgomery Bus Boycott to protest the segregation of blacks and whites on public buses. With the support of most of Montgomery's 50,000 African Americans, the boycott lasted for 381 days until the local ordinance segregating African-Americans and whites on public buses was lifted. Ninety percent of African Americans in Montgomery took part in the boycotts, which reduced bus revenue by 80%. A federal court ordered Montgomery's buses desegregated in November 1956, and the boycott ended in triumph. (W. Chafe, The Unfinished Journey)
A young Baptist minister named Martin Luther King, Jr., was president of the Montgomery Improvement Association, the organization that directed the boycott. The protest made King a national figure. His eloquent appeals to Christian brotherhood and American idealism created a positive impression on people both inside and outside the South.
Little Rock was the capital of Arkansas, a relatively progressive southern state. A crisis erupted, however, when Arkansas Governor Orval Faubus called out the National Guard on September 4 to prevent the nine African-American students who had sued for the right to attend an integrated school from entering Little Rock Central High School. The nine students had been chosen to attend Central High because of their excellent grades. On the first day of school, only one of the nine students showed up, because she did not receive the phone call about the danger of going to school. She was harassed by White Americans outside the school, and the police had to take her away in a patrol car to protect her. Afterwards, the nine students had to carpool to school and be escorted by military personnel in jeeps.
Faubus was not a proclaimed segregationist. The Arkansas Democratic Party, which then controlled politics in the state, put significant pressure on Faubus after he had indicated he would investigate bringing Arkansas into compliance with the Brown decision. Faubus then took his stand against integration and against the Federal court order that required it.
Faubus' order received the attention of President Dwight D. Eisenhower, who was determined to enforce the orders of the Federal courts. Critics had charged he was lukewarm, at best, on the goal of desegregation of public schools. Eisenhower federalized the National Guard and ordered them to return to their barracks. Eisenhower then deployed elements of the 101st Airborne Division to Little Rock to protect the students.
The students were able to attend high school. They had to pass through a gauntlet of spitting, jeering whites to arrive at school on their first day, and to put up with harassment from fellow students for the rest of the year. Although federal troops escorted the students between classes, the students were still teased and even attacked by white students when the soldiers weren't around. One of the Little Rock Nine, Minnijean Brown, was expelled for spilling a bowl of chili on the head of a white student who was allegedly harassing her in the school lunch line.
Only one of the Little Rock Nine, Ernest Green, got the chance to graduate; after the 1957-58 school year was over, the Little Rock school system decided to shut public schools completely rather than continue to integrate. Other school systems across the South followed suit.
The Civil Rights Movement received an infusion of energy with a student sit-in at a Woolworth's store in Greensboro, North Carolina. On February 1, 1960, four students Ezell A. Blair Jr. (now known as Jibreel Khazan), David Richmond, Joseph McNeil, and Franklin McCain from North Carolina Agricultural & Technical College, an all-black college, sat down at the segregated lunch counter to protest Woolworth's policy of excluding African Americans. These protesters were encouraged to dress professionally, to sit quietly, and to occupy every other stool so that potential white sympathizers could join in. The sit-in soon inspired other sit-ins in Richmond, Virginia; Nashville, Tennessee; and Atlanta, Georgia. As students across the south began to "sit-in" at the lunch counters of a few of their local stores, local authority figures sometimes used brute force to physically escort the demonstrators from the lunch facilities.
The "sit-in" technique was not new— as far back as 1942, the Congress of Racial Equality sponsored sit-ins in Chicago, St. Louis in 1949 and Baltimore in 1952. In 1960 the technique succeeded in bringing national attention to the movement. The success of the Greensboro sit-in led to a rash of student campaigns throughout the South. Probably the best organized, most highly disciplined, the most immediately effective of these was in Nashville, Tennessee. By the end of 1960, the sit-ins had spread to every southern and border state and even to Nevada, Illinois, and Ohio.
Demonstrators focused not only on lunch counters but also on parks, beaches, libraries, theaters, museums, and other public places. Upon being arrested, student demonstrators made "jail-no-bail" pledges, to call attention to their cause and to reverse the cost of protest, thereby saddling their jailers with the financial burden of prison space and food.
In 1960 activists who had led these sit-ins formed the Student Nonviolent Coordinating Committee (SNCC) to take these tactics of nonviolent confrontation further.
Freedom Rides were journeys by Civil Rights activists on interstate buses into the segregated southern United States to test the United States Supreme Court decision Boynton v. Virginia (1960), which ended segregation for passengers engaged in interstate travel. Organized by CORE, the first Freedom Ride of the 1960s left Washington, D.C. on May 4, 1961, and was scheduled to arrive in New Orleans on May 17.
During the first and subsequent Freedom Rides, activists traveled through the Deep South to integrate seating patterns and desegregate bus terminals, including restrooms and water fountains. That proved to be a dangerous mission. In Anniston, Alabama, one bus was firebombed, forcing its passengers to flee for their lives. In Birmingham, Alabama, an FBI informant reported that Public Safety Commissioner Eugene "Bull" Connor gave Ku Klux Klan members 15 minutes to attack an incoming group of freedom riders before having police "protect" them. The riders were severely beaten "until it looked like a bulldog had got a hold of them."
Mob violence in Anniston and Birmingham temporarily halted the rides until SNCC activists arrived in Birmingham to resume them. In Montgomery, Alabama a mob charged another bus load of riders, knocking John Lewis unconscious with a crate and smashing Life photographer Don Urbrock in the face with his own camera. A dozen men surrounded Jim Zwerg, a white student from Fisk University, and beat him in the face with a suitcase, knocking out his teeth.
The freedom riders continued their rides into Jackson, Mississippi, where they were arrested for "breaching the peace" by using "white only" facilities. New freedom rides were organized by many different organizations. As riders arrived in Jackson, they were arrested. By the end of summer, more than 300 had been jailed in Mississippi.
The jailed freedom riders were treated harshly, crammed into tiny, filthy cells and sporadically beaten. In Jackson, Mississippi, some male prisoners were forced to do hard labor in 100-degree heat. Others were transferred to Mississippi State Penitentiary at Parchman, where their food was deliberately oversalted and their mattresses were removed. Sometimes the men were suspended by "wrist breakers" from the walls. Typically, the windows of their cells were shut tight on hot days, making it hard for them to breathe.
Eventually, public sympathy and support for the freedom riders forced the Kennedy administration to order the Interstate Commerce Commission (ICC) to issue a new desegregation order. When the new ICC rule took effect on November 1st, passengers were permitted to sit wherever they chose on the bus; "white" and "colored" signs came down in the terminals; separate drinking fountains, toilets, and waiting rooms were consolidated; and lunch counters began serving people regardless of skin color.
The student movement involved such celebrated figures as John Lewis, the single-minded activist who "kept on" despite many beatings and harassments; James Lawson, the revered "guru" of nonviolent theory and tactics; Diane Nash, an articulate and intrepid public champion of justice; Bob Moses, pioneer of voting registration in Mississippi—the most rural and most dangerous part of the South; and James Bevel, a fiery preacher and charismatic organizer and facilitator. Other prominent student activists included Charles McDew; Bernard Lafayette; Charles Jones; Lonnie King; Julian Bond (associated with Atlanta University); Hosea Williams; and Stokely Carmichael (who later changed his name to Kwame Ture).
After the Freedom Rides, local black leaders in Mississippi such as Amzie Moore, Aaron Henry, Medgar Evers, and others asked SNCC to help register black voters and build community organizations that could win a share of political power in the state. Mississippi's 1890 constitution, with provisions such as poll taxes, residency requirements, and literacy tests, had made registration more complicated and stripped blacks from the voter rolls. After so many years, the intent to stop blacks from voting had become part of the culture of white supremacy. In the fall of 1961, SNCC organizer Robert Moses began the first such project in McComb and the surrounding counties in the southwest corner of the state. Their efforts were met with violent repression from state and local lawmen, the White Citizens' Council, and the Ku Klux Klan, resulting in beatings, hundreds of arrests, and the murder of voting activist Herbert Lee.
White opposition to black voter registration was so intense in Mississippi that Freedom Movement activists concluded that all of the state's civil rights organizations had to unite in a coordinated effort to have any chance of success. In February of 1962, representatives of SNCC, CORE, and the NAACP formed the Council of Federated Organizations (COFO). At a subsequent meeting in August, SCLC became part of COFO.
In the spring of 1962, with funds from the Voter Education Project, SNCC/COFO began voter registration organizing in the Mississippi Delta area around Greenwood, and in the areas surrounding Hattiesburg, Laurel, and Holly Springs. As in McComb, their efforts were met with fierce opposition: arrests, beatings, shootings, arson, and murder. Registrars used the literacy test to keep blacks off the voting rolls by creating standards that even highly educated people could not meet. In addition, employers fired blacks who tried to register and landlords evicted them from their homes. Over the following years, the black voter registration campaign spread across the state.
Similar voter registration campaigns — with similar responses — were begun by SNCC, CORE, and SCLC in Louisiana, Alabama, southwest Georgia, and South Carolina. By 1963, voter registration campaigns in the South were as integral to the Freedom Movement as desegregation efforts. After passage of the Civil Rights Act of 1964, protecting and facilitating voter registration despite state barriers became the main effort of the movement. It resulted in passage of the Voting Rights Act of 1965.
James Meredith won a lawsuit that allowed him admission to the University of Mississippi in September 1962. He attempted to enter campus on September 20, on September 25, and again on September 26, only to be blocked by Mississippi Governor Ross R. Barnett, who proclaimed that "no school will be integrated in Mississippi while I am your Governor."
After the Fifth U.S. Circuit Court of Appeals held both Barnett and Lieutenant Governor Paul B. Johnson, Jr. in contempt, with fines of more than $10,000 for each day they refused to allow Meredith to enroll, Meredith, escorted by a force of U.S. Marshals, entered the campus on September 30, 1962. White students and other whites began rioting that evening, throwing rocks at the U.S. Marshals guarding Meredith at Lyceum Hall, then firing on the marshals. Two people, including a French journalist, were killed; 28 marshals suffered gunshot wounds; and 160 others were injured. After the Mississippi Highway Patrol withdrew from the campus, President Kennedy sent the regular Army to the campus to quell the uprising. Meredith was able to begin classes the following day, after the troops arrived.
The SCLC, which had been criticized by some student activists for its failure to participate more fully in the freedom rides, committed much of its prestige and resources to a desegregation campaign in Albany, Georgia, in November 1961. King, who had been criticized personally by some SNCC activists for his distance from the dangers that local organizers faced—and given the derisive nickname "De Lawd" as a result—intervened personally to assist the campaign led by both SNCC organizers and local leaders.
The campaign was a failure because of the canny tactics of Laurie Pritchett, the local police chief, and divisions within the black community. The goals may not have been specific enough. Pritchett contained the marchers without the kind of violent attacks on demonstrators that inflamed national opinion. He also arranged for arrested demonstrators to be taken to jails in surrounding communities, leaving plenty of room in his own jail. Pritchett also saw King's presence as a danger and forced his release to avoid King's rallying the black community. King left in 1962 without having achieved any dramatic victories. The local movement, however, continued the struggle, and it obtained significant gains in the next few years.
The Albany movement proved to be an important education for the SCLC, however, when it undertook the Birmingham campaign in 1963. The campaign focused on one goal—the desegregation of Birmingham's downtown merchants, rather than total desegregation, as in Albany. It was also helped by the brutal response of local authorities, in particular Eugene "Bull" Connor, the Commissioner of Public Safety. He had long held much political power, but had lost a recent election for mayor to a less rabidly segregationist candidate. Refusing to accept the new mayor's authority, Connor intended to stay in office.
The campaign used a variety of nonviolent methods of confrontation, including sit-ins, kneel-ins at local churches, and a march to the county building to mark the beginning of a drive to register voters. The city, however, obtained an injunction barring all such protests. Convinced that the order was unconstitutional, the campaign defied it and prepared for mass arrests of its supporters. King elected to be among those arrested on April 12, 1963.
While in jail, King wrote his famous Letter from Birmingham Jail on the margins of a newspaper, since he had not been allowed any writing paper while held in solitary confinement by jail authorities. Supporters pressured the Kennedy Administration to intervene to obtain King's release or better conditions. King eventually was allowed to call his wife, who was recuperating at home after the birth of their fourth child, and was released on April 19.
The campaign, however, was faltering because the movement was running out of demonstrators willing to risk arrest. SCLC organizers came up with a bold and controversial alternative, calling on high school students to take part in the demonstrations. More than one thousand students skipped school on May 2 to join the demonstrations, in what would come to be called the Children's Crusade. More than six hundred ended up in jail. This was newsworthy, but in this first encounter the police acted with restraint. On the next day, however, another one thousand students gathered at the church. When they started marching, Bull Connor unleashed police dogs on them, then turned the city's fire hoses on the children. Television cameras broadcast to the nation the scenes of water from the hoses knocking down schoolchildren and dogs attacking individual demonstrators.
Widespread public outrage forced the Kennedy Administration to intervene more forcefully in the negotiations between the white business community and the SCLC. On May 10, the parties announced an agreement to desegregate the lunch counters and other public accommodations downtown, to create a committee to eliminate discriminatory hiring practices, to arrange for the release of jailed protesters, and to establish regular means of communication between black and white leaders.
Not everyone in the black community approved of the agreement. The Rev. Fred Shuttlesworth was particularly critical, since his experience in dealing with Birmingham's power structure had left him deeply skeptical of its good faith. The reaction from parts of the white community was even more violent. The Gaston Motel, which housed the SCLC's unofficial headquarters, was bombed, as was the home of King's brother, the Reverend A. D. King. Kennedy prepared to federalize the Alabama National Guard but did not follow through. Four months later, on September 15, Ku Klux Klan members bombed the Sixteenth Street Baptist Church in Birmingham, killing four young girls.
Other events of the summer of 1963:
On June 11, 1963, George Wallace, Governor of Alabama, tried to block the integration of the University of Alabama. President John F. Kennedy sent enough force to make Governor Wallace step aside, allowing the enrollment of two black students. That evening, JFK addressed the nation on TV and radio with a historic civil rights speech. The next day Medgar Evers was murdered in Mississippi. The next week as promised, on June 19, 1963, JFK submitted his Civil Rights bill to Congress.
A. Philip Randolph had planned a march on Washington, D.C., in 1941 in support of demands for elimination of employment discrimination in defense industries; he called off the march when the Roosevelt Administration met the demand by issuing Executive Order 8802 barring racial discrimination and creating an agency to oversee compliance with the order.
Randolph and Bayard Rustin were the chief planners of the second march, which they proposed in 1962. The Kennedy Administration applied great pressure on Randolph and King to call it off but without success. The march was held on August 28, 1963.
Unlike the planned 1941 march, for which Randolph included only black-led organizations in the planning, the 1963 march was a collaborative effort of all of the major civil rights organizations, the more progressive wing of the labor movement, and other liberal organizations. The march had six official goals: "meaningful civil rights laws, a massive federal works program, full and fair employment, decent housing, the right to vote, and adequate integrated education." Of these, the march's real focus was on passage of the civil rights law that the Kennedy Administration had proposed after the upheavals in Birmingham.
National media attention also greatly contributed to the march's national exposure and probable impact. In his section "The March on Washington and Television News," William Thomas notes: "Over five hundred cameramen, technicians, and correspondents from the major networks were set to cover the event. More cameras would be set up than had filmed the last Presidential inauguration. One camera was positioned high in the Washington Monument, to give dramatic vistas of the marchers". By carrying the organizers' speeches and offering their own commentary, television stations literally framed the way their local audiences saw and understood the event.
The march was a success, although not without controversy. An estimated 200,000 to 300,000 demonstrators gathered in front of the Lincoln Memorial, where King delivered his famous "I Have a Dream" speech. While many speakers applauded the Kennedy Administration for the efforts it had made toward obtaining new, more effective civil rights legislation protecting the right to vote and outlawing segregation, John Lewis of SNCC took the Administration to task for how little it had done to protect southern blacks and civil rights workers under attack in the Deep South.
After the march, King and other civil rights leaders met with President Kennedy at the White House. While the Kennedy Administration appeared to be sincerely committed to passing the bill, it was not clear that it had the votes to do it. But when President Kennedy was assassinated on November 22, 1963, the new President Lyndon Johnson decided to use his influence in Congress to bring about much of Kennedy's legislative agenda.
In the summer of 1964, COFO brought nearly 1,000 activists to Mississippi — most of them white college students — to join with local black activists to register voters, teach in "Freedom Schools," and organize the Mississippi Freedom Democratic Party (MFDP).
Many of Mississippi's white residents deeply resented the outsiders and attempts to change their society. State and local governments, police, the White Citizens' Council and the Ku Klux Klan used arrests, beatings, arson, murder, spying, firing, evictions, and other forms of intimidation and harassment to oppose the project and prevent blacks from registering to vote or achieving social equality.
On June 21, 1964, three civil rights workers were murdered by members of the Klan, some of them also members of the Neshoba County sheriff's department: James Chaney, a young black Mississippian and plasterer's apprentice, and two Jewish activists, Andrew Goodman, a Queens College anthropology student, and Michael Schwerner, a CORE organizer from Manhattan's Lower East Side (see Mississippi civil rights workers murders for details).
From June to August, Freedom Summer activists worked in 38 local projects scattered across the state, with the largest number concentrated in the Mississippi Delta region. At least 30 Freedom Schools with close to 3,500 students were established, and 28 community centers set up.
Over the course of the Summer Project, some 17,000 Mississippi blacks attempted to become registered voters in defiance of all the forces of white supremacy arrayed against them — only 1,600 (less than 10%) succeeded. But more than 80,000 joined the MFDP.
Though Freedom Summer failed to register many voters, it had a significant effect on the course of the Civil Rights Movement. It helped break down the decades of isolation and repression that were the foundation of the Jim Crow system. Before Freedom Summer, the national news media had paid little attention to the persecution of black voters in the Deep South and the dangers endured by black civil rights workers. When the lives of affluent northern white students were threatened and taken, the full attention of the media spotlight turned on the state. The apparent disparity between the value which the media placed on the lives of whites and blacks embittered many black activists. Perhaps the most significant effect of Freedom Summer was on the volunteers themselves, almost all of whom — black and white — still consider it one of the defining periods of their lives.
Blacks in Mississippi had been disfranchised by statutory and constitutional changes since the late 1800s. In 1963 COFO held a Freedom Vote in Mississippi to demonstrate the desire of black Mississippians to vote. More than 80,000 people registered and voted in the mock election which pitted an integrated slate of candidates from the "Freedom Party" against the official state Democratic Party candidates.
In 1964, organizers launched the Mississippi Freedom Democratic Party (MFDP) to challenge the all-white official party. When Mississippi voting registrars refused to recognize their candidates, they held their own primary. They selected Fannie Lou Hamer, Annie Devine, and Victoria Gray to run for Congress and a slate of delegates to represent Mississippi at the 1964 Democratic National Convention.
The presence of the Mississippi Freedom Democratic Party in Atlantic City, New Jersey, was inconvenient, however, for the convention organizers. They had planned a triumphant celebration of the Johnson Administration’s achievements in civil rights, rather than a fight over racism within the Democratic Party. All-white delegations from other Southern states threatened to walk out if the official slate from Mississippi was not seated. Johnson was worried about the inroads that Republican Barry Goldwater’s campaign was making in what previously had been the white Democratic stronghold of the "Solid South", as well as support which George Wallace had received in the North during the Democratic primaries.
Johnson could not, however, prevent the MFDP from taking its case to the Credentials Committee. There Fannie Lou Hamer testified eloquently about the beatings that she and others endured and the threats they faced for trying to register to vote. Turning to the television cameras, Hamer asked, "Is this America?"
Johnson offered the MFDP a "compromise" under which it would receive two non-voting, at-large seats, while the white delegation sent by the official Democratic Party would retain its seats. The MFDP angrily rejected the "compromise."
The MFDP kept up its agitation within the convention, even after it was denied official recognition. When all but three of the "regular" Mississippi delegates left because they refused to pledge allegiance to the party, the MFDP delegates borrowed passes from sympathetic delegates and took the seats vacated by the official Mississippi delegates. They were then removed by the national party. When they returned the next day to find that convention organizers had removed the empty seats that had been there the day before, they stayed to sing freedom songs.
The 1964 Democratic Party convention disillusioned many within the MFDP and the Civil Rights Movement, but it did not destroy the MFDP itself. The MFDP became more radical after Atlantic City. It invited Malcolm X, of the Nation of Islam, to speak at one of its conventions and opposed the war in Vietnam.
On December 10, 1964, Dr. Martin Luther King, Jr. was awarded the Nobel Peace Prize, the youngest man to receive the award; he was 35 years of age.
SNCC had undertaken an ambitious voter registration program in Selma, Alabama, in 1963, but by 1965 had made little headway in the face of opposition from Selma's sheriff, Jim Clark. After local residents asked the SCLC for assistance, King came to Selma to lead several marches, at which he was arrested along with 250 other demonstrators. The marchers continued to meet violent resistance from police. Jimmie Lee Jackson, a resident of nearby Marion, was killed by police at a later march in February.
On March 7, 1965, Hosea Williams of the SCLC and John Lewis of SNCC led a march of 600 people intending to walk the 54 miles (87 km) from Selma to the state capital in Montgomery. Only six blocks into the march, however, at the Edmund Pettus Bridge, state troopers and local law enforcement, some mounted on horseback, attacked the peaceful demonstrators with billy clubs, tear gas, rubber tubes wrapped in barbed wire, and bull whips. They drove the marchers back into Selma. John Lewis was knocked unconscious and dragged to safety. At least 16 other marchers were hospitalized. Among those gassed and beaten was Amelia Boynton Robinson, who was at the center of civil rights activity at the time.
The national broadcast of the footage of lawmen attacking unresisting marchers seeking the right to vote provoked a national response as had scenes from Birmingham two years earlier. The marchers were able to obtain a court order permitting them to make the march without incident two weeks later.
After a second march to the site of Bloody Sunday on March 9, however, local whites murdered another voting rights supporter, Rev. James Reeb. He died in a Birmingham hospital March 11. On March 25, four Klansmen shot and killed Detroit homemaker Viola Liuzzo as she drove marchers back to Selma at night after the successfully completed march to Montgomery.
Eight days after the first march, Johnson delivered a televised address in support of the voting rights bill he had sent to Congress. In it he stated:
But even if we pass this bill, the battle will not be over. What happened in Selma is part of a far larger movement which reaches into every section and state of America. It is the effort of American Negroes to secure for themselves the full blessings of American life.
Their cause must be our cause too. Because it is not just Negroes, but really it is all of us, who must overcome the crippling legacy of bigotry and injustice. And we shall overcome.
Johnson signed the Voting Rights Act of 1965 on August 6. The 1965 act suspended poll taxes, literacy tests, and other subjective voter tests. It authorized Federal supervision of voter registration in states and individual voting districts where such tests were being used. African Americans who had been barred from registering to vote finally had an alternative to taking suits to local or state courts. If voting discrimination occurred, the 1965 act authorized the Attorney General of the United States to send Federal examiners to replace local registrars. Johnson reportedly told associates of his concern that signing the bill had lost the white South for the Democratic Party for the foreseeable future.
The act had an immediate and positive impact for African Americans. Within months of its passage, a quarter of a million new black voters had been registered, one third of them by federal examiners. Within four years, voter registration in the South had more than doubled. In 1965, Mississippi had the highest black voter turnout (74%) and led the nation in the number of black public officials elected. In 1969, Tennessee had a 92.1% turnout; Arkansas, 77.9%; and Texas, 73.1%.
Several whites who had opposed the Voting Rights Act paid a quick price. In 1966 Sheriff Jim Clark of Alabama, infamous for using cattle prods against civil rights marchers, was up for reelection. Although he took the notorious "Never" pin off his uniform, he was defeated, as newly registered black voters turned out to remove him from office.
Blacks' regaining the power to vote changed the political landscape of the South. When Congress passed the Voting Rights Act, only about 100 African Americans held elective office, all in northern states of the U.S. By 1989, there were more than 7,200 African Americans in office, including more than 4,800 in the South. Nearly every Black Belt county (where populations were majority black) in Alabama had a black sheriff. Southern blacks held top positions within city, county, and state governments.
Atlanta elected a black mayor, Andrew Young, as did Jackson, Mississippi (Harvey Johnson) and New Orleans (Ernest Morial). Black politicians on the national level included Barbara Jordan, who represented Texas in Congress, and Andrew Young, who was appointed United States Ambassador to the United Nations during the Carter administration. Julian Bond was elected to the Georgia Legislature in 1965, although political reaction to his public opposition to U.S. involvement in Vietnam prevented him from taking his seat until 1967. John Lewis has represented Georgia's 5th congressional district in the United States House of Representatives since 1987.
Rev. James Lawson invited King to Memphis, Tennessee, in March 1968 to support a strike by sanitation workers. They had launched a campaign for union representation after two workers were accidentally killed on the job.
A day after delivering his famous "Mountaintop" sermon at Lawson's church, King was assassinated on April 4, 1968. Riots broke out in more than 110 cities across the United States in the days that followed, notably in Chicago, Baltimore, and in Washington, D.C. The damage done in many cities destroyed black businesses. It would take more than a generation for those areas to recover. Some still have not.
Rev. Ralph Abernathy succeeded King as the head of the SCLC and attempted to carry forth King's plan for a Poor People's March. It was to unite blacks and whites to campaign for fundamental changes in American society and economic structure. The march went forward under Abernathy's plainspoken leadership but did not achieve its goals.
During the years preceding his election to the presidency, John F. Kennedy's record of voting on issues of racial discrimination had been scant. Kennedy openly confessed to his closest advisors that during the first months of his presidency, his knowledge of the civil rights movement was "lacking".
For the first two years of the Kennedy Administration, attitudes to both the President and Attorney-General, Robert F. Kennedy, were mixed. Many viewed the Administration with suspicion. A well of historical cynicism toward white liberal politics had left a sense of uneasy disdain by African-Americans toward any white politician who claimed to share their concerns for freedom. Still, many had a strong sense that in the Kennedys there was a new age of political dialogue beginning.
The naiveté of the Kennedy brothers was demonstrated in Robert Kennedy's declaration in 1962 that, "[T]he Irish were not wanted here. Now an Irish Catholic is President of the United States. There is no question about it, in the next forty years a Negro can achieve the same position."
Although observers frequently use the phrase "the Kennedy Administration," or even "President Kennedy," when discussing the legislative and executive support of the Civil Rights Movement between 1960 and 1963, many of the initiatives were actually the result of Robert Kennedy's passion. Through his rapid education in the realities of racism, Robert Kennedy underwent a thorough conversion of purpose as Attorney-General. Asked in an interview in May 1962, "What do you see as the big problem ahead for you, is it Crime or Internal Security?", Robert Kennedy replied, "Civil Rights." The President came to share his brother's sense of urgency on these matters to such an extent that it was at the Attorney-General's insistence that he made his famous address to the nation.
When a white mob attacked and burned the First Baptist Church in Montgomery, Alabama, where King was holding out with protesters, the Attorney-General telephoned King to ask him not to leave the building until the U.S. Marshals and National Guard could secure the area. King proceeded to berate Kennedy for "allowing the situation to continue". King later publicly thanked Robert Kennedy for commanding the force that broke up an attack which might otherwise have ended King's life.
The relationship between the two men underwent a change from mutual suspicion to one of shared aspirations. For Dr. King, Robert Kennedy initially represented the 'softly softly' approach that in former years had hampered the movement of blacks against oppression in the U.S. For Robert Kennedy, King initially represented what he then considered an unrealistic militancy, a militancy that some white liberals regarded as itself the cause of so little governmental progress.
King regarded much of the efforts of the Kennedys as an attempt to control the movement and siphon off its energies, yet he came to find the brothers' efforts crucial. It was at Robert Kennedy's constant insistence, through conversations with King and others, that King came to recognize the fundamental importance of electoral reform and suffrage: the need for black Americans to engage actively not only in protest but in political dialogue at the highest levels. In time the President gained King's respect and trust through the frank dialogue and efforts of the Attorney-General. Robert Kennedy became very much his brother's key advisor on matters of racial equality, and the President regarded the issue of civil rights as a function of the Attorney-General's office.
With a very slim majority in Congress, the President's ability to press ahead with legislation relied considerably on a balancing game with the Senators and Congressmen of the South. Indeed, without the support of Vice-President Johnson, who had years of experience in Congress and longstanding relations there, many of the Attorney-General's programs would not have progressed at all.
By late 1962, frustration at the slow pace of political change was balanced by the movement's strong support for legislative initiatives: housing rights, administrative representation across all US Government departments, safe conditions at the ballot box, pressure on the courts to prosecute racist criminals. King remarked by the end of the year, "This administration has reached out more creatively than its predecessors to blaze new trails [in voting rights and government appointments]. Its vigorous young men have launched imaginative and bold forays and displayed a certain élan in the attention they give to civil rights issues."
From squaring off against Governor George Wallace, to "tearing into" Vice-President Johnson (for failing to desegregate areas of the administration), to threatening corrupt white Southern judges with disbarment, to desegregating interstate transport, Robert Kennedy came to be consumed by the Civil Rights movement. He carried it forward into his own bid for the presidency in 1968. On the night of Governor Wallace's capitulation, President Kennedy gave an address to the nation which marked the changing tide, an address which was to become a landmark for the change in political policy which ensued. In it President Kennedy spoke of the need to act decisively and to act now:
"We preach freedom around the world, and we mean it, and we cherish our freedom here at home, but are we to say to the world, and much more importantly, to each other that this is the land of the free except for the Negroes; that we have no second-class citizens except Negroes; that we have no class or caste system, no ghettoes, no master race except with respect to Negroes? Now the time has come for this Nation to fulfill its promise. The events in Birmingham and elsewhere have so increased the cries for equality that no city or State or legislative body can prudently choose to ignore them."
Assassination cut short the lives and careers of both Kennedy brothers and Dr. Martin Luther King, Jr. The essential groundwork for the Civil Rights Act of 1964 had been laid before John F. Kennedy was assassinated. The dire need for political and administrative reform had been driven home on Capitol Hill by the combined efforts of the Kennedy administration, Dr. King and other leaders, and President Lyndon Johnson.
In 1966, Robert Kennedy undertook a tour of South Africa in which he championed the cause of the anti-Apartheid movement. His tour gained international praise at a time when few politicians dared to entangle themselves in the politics of South Africa. Kennedy spoke out against the oppression of the native population and was welcomed by the black population as though he were a visiting head of state. In an interview with LOOK Magazine he said:
"At the University of Natal in Durban, I was told the church to which most of the white population belongs teaches apartheid as a moral necessity. A questioner declared that few churches allow black Africans to pray with the white because the Bible says that is the way it should be, because God created Negroes to serve. "But suppose God is black", I replied. "What if we go to Heaven and we, all our lives, have treated the Negro as an inferior, and God is there, and we look up and He is not white? What then is our response?" There was no answer. Only silence."
Many in the Jewish-American community supported the Civil Rights Movement and Jews were more actively involved in the civil rights movement than any other white group in America. Many Jewish students worked in concert with African Americans for CORE, SCLC, and SNCC as full-time organizers and summer volunteers during the Civil Rights era. Jews made up roughly half of the white northern volunteers involved in the 1964 Mississippi Freedom Summer project and approximately half of the civil rights attorneys active in the South during the 1960s.
Jewish leaders were arrested with Rev. Dr. Martin Luther King, Jr. in St. Augustine, Florida, in 1964 after a challenge to racial segregation in public accommodations. Abraham Joshua Heschel, a writer, rabbi and professor of theology at the Jewish Theological Seminary of America in New York was outspoken on the subject of civil rights. He marched arm-in-arm with Dr. King in the 1965 March on Selma.
Brandeis University, the only nonsectarian Jewish-sponsored college or university in the world, created the Transitional Year Program (TYP) in 1968, in part in response to Rev. Dr. Martin Luther King's assassination. The faculty created it to renew the University's commitment to social justice. Recognizing Brandeis as a university with a commitment to academic excellence, these faculty members created a chance for disadvantaged students to participate in an empowering educational experience.
The program began by admitting 20 black males. As it developed, two groups of students have been given chances. The first group consists of students whose secondary schooling experiences and/or home communities may have lacked the resources to foster adequate preparation for success at elite colleges like Brandeis; for example, their high schools may not have offered AP or honors courses or high-quality laboratory experiences. Students selected had to have excelled in the curricula their schools did offer.
The second group of students includes those whose life circumstances have created formidable challenges that required focus, energy, and skills that otherwise would have been devoted to academic pursuits. Some have served as heads of their households, others have worked full-time while attending high school full-time, and others have shown leadership in other ways.
King was becoming more estranged from the Johnson Administration. In 1965 he broke with it by calling for peace negotiations and a halt to the bombing of Vietnam. He moved further left in the following years, speaking of the need for economic justice and thoroughgoing changes in American society. He believed change was needed beyond the civil rights gained by the movement.
King's attempts to broaden the scope of the Civil Rights Movement were halting and largely unsuccessful, however. King made several efforts in 1965 to take the Movement north to address issues of employment and housing discrimination. His campaign in Chicago failed, as Chicago Mayor Richard J. Daley marginalized King's campaign by promising to "study" the city's problems. In 1966, white demonstrators holding "white power" signs in notoriously racist Cicero, a suburb of Chicago, threw stones at King and other marchers demonstrating against housing segregation. King was injured in this attack.
While the Ku Klux Klan was not as prevalent as it was in the South, other problems prevailed in northern cities. Urban black neighborhoods were among the poorest in most major cities. Unemployment was much higher than in white neighborhoods, and crime was frequent. African Americans rarely owned the stores or businesses where they lived and mostly worked menial or blue-collar jobs for a fraction of the pay that white co-workers received. African Americans often made only enough money to live in dilapidated tenements that were privately owned or poorly maintained public housing. They also attended schools that were often the worst academically in the city and that had very few white students. Worst of all, black neighborhoods were subject to police problems that white neighborhoods were not at all accustomed to dealing with.
The police forces in America were set up with the motto "To Protect and Serve." Rarely did this occur in any black neighborhoods. Rather, many blacks felt police only existed to "Patrol and Control." The racial makeup of the police departments, usually largely white, was a large factor. In black neighborhoods such as Harlem, the ratio was only one black officer for every six white officers, and in majority black cities such as Newark, New Jersey only 145 of the 1322 police officers were black. Police forces in Northern cities were largely composed of white ethnics, mainly Irish, Italian, and Eastern European officers who would routinely harass blacks with or without provocation.
One of the first major race riots took place in Harlem, New York, in the summer of 1964. A white Irish-American police officer, Thomas Gilligan, shot a 15-year-old black named James Powell for allegedly charging at him with a knife. In fact, Powell was unarmed. A group of black citizens demanded Gilligan's suspension. Hundreds of young demonstrators marched peacefully to the 67th Street police station on July 17, 1964, the day after Powell's death.
Gilligan was not suspended. Although this precinct had promoted the NYPD's first black station commander, neighborhood residents were tired of the inequalities. They looted and burned anything that was not black-owned in the neighborhood. This unrest spread to Bedford-Stuyvesant, a major black neighborhood in Brooklyn. That summer, rioting also broke out in Philadelphia, for similar reasons.
In the aftermath of the riots of July 1964, the federal government funded a pilot program called Project Uplift, in which thousands of young people in Harlem were given jobs during the summer of 1965. The project was inspired by a report generated by HARYOU called Youth in the Ghetto. HARYOU was given a major role in organizing the project, together with the National Urban League and nearly 100 smaller community organizations. Permanent jobs at living wages, however, were still out of reach of many young black men.
In 1965, President Lyndon B. Johnson signed the Voting Rights Act, but the new law had no immediate effect on living conditions for blacks. A few days after the act became law, a riot broke out in the South Central Los Angeles neighborhood of Watts. Like Harlem, Watts was an impoverished neighborhood with very high unemployment. Its residents had to endure patrols by a largely white police department. While arresting a young man for drunk driving, police officers argued with the suspect's mother before onlookers. The conflict triggered a massive destruction of property through six days of rioting. Thirty-four people were killed and property valued at about $30 million was destroyed, making the Watts riot one of the worst in American history.
With black militancy on the rise, increased acts of anger were now directed at the police. Black residents growing tired of police brutality continued to rebel. Some young people joined groups such as the Black Panthers, whose popularity was based in part on their reputation for confronting abusive police officers.
Riots occurred in 1966 and 1967 in cities such as Atlanta, San Francisco, Oakland, Baltimore, Seattle, Cleveland, Cincinnati, Columbus, Newark, Chicago, New York City (specifically in Brooklyn, Harlem and the Bronx), and worst of all in Detroit.
In Detroit, a comfortable black middle class had begun to develop among families of blacks who worked at well-paying jobs in the automotive industry. Blacks who had not moved upward were living in much worse conditions, subject to the same problems as blacks in Watts and Harlem. When white police officers shut down an illegal bar on a liquor raid and arrested a large group of patrons, furious residents rioted.
One significant effect of the Detroit riot was the acceleration of "white flight," the trend of white residents moving from inner-city neighborhoods to predominantly white suburbs. Detroit experienced "middle class black flight" as well. Cities such as Detroit, Newark, and Baltimore now have less than 40% white population as a result of these riots and other social changes. Changes in industry caused continued job losses, depopulation of middle classes, and concentrated poverty in such cities. They contain some of the worst living conditions for blacks anywhere in America.
As a result of the riots, President Johnson created the National Advisory Commission on Civil Disorders in 1967. The commission's final report called for major reforms in employment and public assistance for black communities. It warned that the United States was moving toward separate white and black societies.
Fresh rioting broke out in April 1968 after the assassination of Dr. Martin Luther King, Jr. Riots erupted in many major cities at once, including Chicago, Cleveland, Baltimore, and Washington, D.C., where damage was especially severe.
Affirmative action policies led to the hiring of more black police officers in every major city. Blacks now make up a proportional majority of the police departments in cities such as Baltimore, Washington, New Orleans, Atlanta, Newark, and Detroit. Civil rights laws have reduced employment discrimination. The conditions that led to frequent rioting in the late 1960s have receded, but not all the problems have been solved.
With industrial and economic restructuring, tens of thousands of industrial jobs have disappeared from the old industrial cities since the late 1950s. Some jobs moved South, as did much of the population, and others left the US altogether. Civil unrest broke out in Miami in 1980, in Los Angeles in 1992, and in Cincinnati in 2001.
At the same time King was finding himself at odds with factions of the Democratic Party, he was facing challenges from within the Civil Rights Movement to the two key tenets upon which the movement had been based: integration and non-violence. Black activists within SNCC and CORE had chafed for some time at the influence wielded by white advisors to civil rights organizations and the disproportionate attention that was given to the deaths of white civil rights workers while black workers' deaths often went virtually unnoticed. Stokely Carmichael, who became the leader of SNCC in 1966, was one of the earliest and most articulate spokespersons for what became known as the "Black Power" movement after he used that slogan, coined by activist and organizer Willie Ricks, in Greenwood, Mississippi on June 17, 1966.
In 1966 SNCC leader Stokely Carmichael began urging African American communities to confront the Ku Klux Klan armed and ready for battle. He felt it was the only way to ever rid the communities of the terror caused by the Klan.
Many people engaged in the Black Power movement also began to gain a stronger sense of black pride and identity. In gaining more of a sense of cultural identity, many blacks demanded that whites no longer refer to them as "Negroes" but as "Afro-Americans." Up until the mid-1960s, blacks had dressed similarly to whites and combed their hair straight. As a part of gaining a unique identity, blacks started to wear loosely fitting dashikis and to grow their hair out into a natural afro. The afro, sometimes nicknamed the "'fro," remained a popular black hairstyle until the late 1970s.
Black Power was made most public, however, by the Black Panther Party, which was founded in Oakland, California, in 1966. The group followed the ideology articulated by Malcolm X and the Nation of Islam, using a "by any means necessary" approach to stopping inequality. It sought to rid African American neighborhoods of police brutality and drew up a ten-point program, among other things. Its dress code consisted of leather jackets, berets, light blue shirts, and an afro hairstyle. The Panthers are best remembered for setting up free breakfast programs, referring to police officers as "pigs", displaying shotguns and a black power fist, and often using the slogan "Power to the people."
Black Power was taken to another level inside prison walls. In 1966, George Jackson formed the Black Guerilla Family in California's San Quentin prison. The goal of the group was to overthrow the white-run government in America and the prison system in general. The group also preached a general hatred of whites and Jews. In 1970, the group displayed its ruthlessness after a white prison guard was found not guilty of shooting three black prisoners from the prison tower: the guard was later found cut to pieces, sending a message throughout the whole prison of how serious the group was.
Also in 1968, Tommie Smith and John Carlos, while being awarded the gold and bronze medals, respectively, at the 1968 Summer Olympics, donned human rights badges and each raised a black-gloved Black Power salute during their podium ceremony. It was, incidentally, the suggestion of the white silver medalist, Peter Norman of Australia, that Smith and Carlos each wear one black glove. Smith and Carlos were immediately ejected from the games by the USOC, and the IOC later issued a lifetime ban against the two. The Black Power movement, however, had been given a stage on live, international television.
King was not comfortable with the "Black Power" slogan, which sounded too much like black nationalism to him. SNCC activists, in the meantime, began embracing the "right to self-defense" in response to attacks from white authorities, and booed King for continuing to advocate non-violence. When King was murdered in 1968, Stokely Carmichael stated that whites had murdered the one person who could prevent rampant rioting and the burning down of major cities, and that blacks would burn every major city to the ground. In major cities from Boston to San Francisco, racial riots broke out in black communities following King's death, and the resulting "white flight" left blacks in dilapidated and nearly irreparable inner cities.
In 1970 Civil Rights lawyer Roy Haber began taking statements from inmates, which eventually totalled fifty pages detailing murders, rapes, beatings and other abuses suffered by the inmates from 1969 to 1971 at Mississippi State Penitentiary. In a landmark case known as Gates v. Collier (1972), four inmates represented by Haber sued the superintendent of Parchman Farm for violating their rights under the United States Constitution. Federal Judge William C. Keady found in favor of the inmates, writing that Parchman Farm violated the civil rights of the inmates by inflicting cruel and unusual punishment. He ordered an immediate end to all unconstitutional conditions and practices. Racial segregation of inmates was abolished, as was the trusty system, which had allowed certain inmates to hold power and control over others.
The prison was renovated in 1972 after the scathing ruling by Judge Keady in which he wrote that the prison was an affront to "modern standards of decency." Among other reforms, the accommodations were made fit for human habitation and the system of "trusties" (in which lifers were armed with rifles and set to guard other inmates) was abolished.
In integrated correctional facilities in northern and western states, blacks represented a disproportionate amount of the prisoners and were often treated as second class citizens at the hands of white correctional officers. Blacks also represented a disproportionate number of death row inmates. As a result, Black Power found a ready constituency inside prison walls where gangs such as the Black Guerilla Family were formed as a way to redress the disproportionalities, organizing Black inmates to take militant action. Eldridge Cleaver's book Soul on Ice was written from his experiences in the California correctional system and further fueled black militancy.
The labor movement in the United States grew out of the need to protect the common interest of workers. For those in the industrial sector, organized labor unions fought for better wages, reasonable hours and safer working conditions. The labor movement led efforts to stop child labor, give health benefits and provide aid to workers who were injured or retired.
The origins of the labor movement lay in the formative years of the American nation, when a free wage-labor market emerged in the artisan trades late in the colonial period. The earliest recorded strike occurred in 1768 when New York journeymen tailors protested a wage reduction. The formation of the Federal Society of Journeymen Cordwainers (shoemakers) in Philadelphia in 1794 marks the beginning of sustained trade union organization among American workers. From that time on, local craft unions proliferated in the cities, publishing lists of "prices" for their work, defending their trades against diluted and cheap labor, and, increasingly, demanding a shorter workday. Thus a job-conscious orientation was quick to emerge, and in its wake there followed the key structural elements characterizing American trade unionism--first, beginning with the formation in 1827 of the Mechanics' Union of Trade Associations in Philadelphia, central labor bodies uniting craft unions within a single city, and then, with the creation of the International Typographical Union in 1852, national unions bringing together local unions of the same trade from across the United States and Canada (hence the frequent union designation "international"). Although the factory system was springing up during these years, industrial workers played little part in the early trade union development. In the nineteenth century, trade unionism was mainly a movement of skilled workers.
The early labor movement was, however, inspired by more than the immediate job interest of its craft members. It harbored a conception of the just society, deriving from the Ricardian labor theory of value and from the republican ideals of the American Revolution, which fostered social equality, celebrated honest labor, and relied on an independent, virtuous citizenship. The transforming economic changes of industrial capitalism ran counter to labor's vision. The result, as early labor leaders saw it, was to raise up "two distinct classes, the rich and the poor." Beginning with the workingmen's parties of the 1830s, the advocates of equal rights mounted a series of reform efforts that spanned the nineteenth century. Most notable were the National Labor Union, launched in 1866, and the Knights of Labor, which reached its zenith in the mid-1880s. On their face, these reform movements might have seemed at odds with trade unionism, aiming as they did at the cooperative commonwealth rather than a higher wage, appealing broadly to all "producers" rather than strictly to wageworkers, and eschewing the trade union reliance on the strike and boycott. But contemporaries saw no contradiction: trade unionism tended to the workers' immediate needs, labor reform to their higher hopes. The two were held to be strands of a single movement, rooted in a common working-class constituency and to some degree sharing a common leadership. But equally important, they were strands that had to be kept operationally separate and functionally distinct.
During the 1880s, that division fatally eroded. Despite its labor reform rhetoric, the Knights of Labor attracted large numbers of workers hoping to improve their immediate conditions. As the Knights carried on strikes and organized along industrial lines, the threatened national trade unions demanded that the group confine itself to its professed labor reform purposes; when it refused, they joined in December 1886 to form the American Federation of Labor (afl). The new federation marked a break with the past, for it denied to labor reform any further role in the struggles of American workers. In part, the assertion of trade union supremacy stemmed from an undeniable reality. As industrialism matured, labor reform lost its meaning--hence the confusion and ultimate failure of the Knights of Labor. Marxism taught Samuel Gompers and his fellow socialists that trade unionism was the indispensable instrument for preparing the working class for revolution. The founders of the afl translated this notion into the principle of "pure and simple" unionism: only by self-organization along occupational lines and by a concentration on job-conscious goals would the worker be "furnished with the weapons which shall secure his industrial emancipation."
That class formulation necessarily defined trade unionism as the movement of the entire working class. The afl asserted as a formal policy that it represented all workers, irrespective of skill, race, religion, nationality, or gender. But the national unions that had created the afl in fact comprised only the skilled trades. Almost at once, therefore, the trade union movement encountered a dilemma: how to square ideological aspirations against contrary institutional realities? As sweeping technological change began to undermine the craft system of production, some national unions did move toward an industrial structure, most notably in coal mining and the garment trades. But most craft unions either refused or, as in iron and steel and in meat packing, failed to organize the less skilled. And since skill lines tended to conform to racial, ethnic, and gender divisions, the trade union movement took on a racist and sexist coloration as well. For a short period, the afl resisted that tendency. But in 1895, unable to launch an interracial machinists' union of its own, the Federation reversed an earlier principled decision and chartered the whites-only International Association of Machinists. Formally or informally, the color bar thereafter spread throughout the trade union movement. In 1902, blacks made up scarcely 3 percent of total membership, most of them segregated in Jim Crow locals. In the case of women and eastern European immigrants, a similar devolution occurred--welcomed as equals in theory, excluded or segregated in practice. (Only the fate of Asian workers was unproblematic; their rights had never been asserted by the afl in the first place.)
Gompers justified the subordination of principle to organizational reality on the constitutional grounds of "trade autonomy," by which each national union was assured the right to regulate its own internal affairs. But the organizational dynamism of the labor movement was in fact located in the national unions. Only as they experienced inner change might the labor movement expand beyond the narrow limits--roughly 10 percent of the labor force--at which it stabilized before World War I.
In the political realm, the founding doctrine of pure-and-simple unionism meant an arm's-length relationship to the state and the least possible entanglement in partisan politics. A total separation had, of course, never been seriously contemplated; some objectives, such as immigration restriction, could be achieved only through state action, and the predecessor to the afl, the Federation of Organized Trades and Labor Unions (1881), had in fact been created to serve as labor's lobbying arm in Washington. Partly because of the lure of progressive labor legislation, even more in response to increasingly damaging court attacks on the trade unions, political activity quickened after 1900. With the enunciation of Labor's Bill of Grievances (1906), the afl laid down a challenge to the major parties. Henceforth it would campaign for its friends and seek the defeat of its enemies.
This nonpartisan entry into electoral politics, paradoxically, undercut the left-wing advocates of an independent working-class politics. That question had been repeatedly debated within the afl, first in 1890 over Socialist Labor party representation, then in 1893-1894 over an alliance with the Populist party, and after 1901 over affiliation with the Socialist party of America. Although Gompers prevailed each time, he never found it easy. Now, as labor's leverage with the major parties began to pay off, Gompers had an effective answer to his critics on the left: the labor movement could not afford to waste its political capital on socialist parties or independent politics. When that nonpartisan strategy failed, as it did in the reaction following World War I, an independent political strategy took hold, first through the robust campaigning of the Conference for Progressive Political Action in 1922, and in 1924 through labor's endorsement of Robert La Follette on the Progressive ticket. By then, however, the Republican administration was moderating its hard line, evident especially in Herbert Hoover's efforts to resolve the simmering crises in mining and on the railroads. In response, the trade unions abandoned the Progressive party, retreated to nonpartisanship, and, as their power waned, lapsed into inactivity.
It took the Great Depression to knock the labor movement off dead center. The discontent of industrial workers, combined with New Deal collective bargaining legislation, at last brought the great mass production industries within striking distance. When the craft unions stymied the afl's organizing efforts, John L. Lewis of the United Mine Workers and his followers broke away in 1935 and formed the Committee for Industrial Organization (cio), which crucially aided the emerging unions in auto, rubber, steel, and other basic industries. In 1938 the cio was formally established as the Congress of Industrial Organizations. By the end of World War II, more than 12 million workers belonged to unions, and collective bargaining had taken hold throughout the industrial economy.
In politics, its enhanced power led the union movement not to a new departure but to a variant on the policy of nonpartisanship. As far back as the Progressive Era, organized labor had been drifting toward the Democratic party, partly because of the latter's greater programmatic appeal, perhaps even more because of its ethnocultural basis of support within an increasingly "new" immigrant working class. With the coming of Roosevelt's New Deal, this incipient alliance solidified, and from 1936 onward the Democratic party could count on--and came to rely on--the campaigning resources of the labor movement. That this alliance partook of the nonpartisan logic of Gompers's authorship--too much was at stake for organized labor to waste its political capital on third parties--became clear in the unsettled period of the early cold war. Not only did the cio oppose the Progressive party of 1948, but it expelled the left-wing unions that broke ranks and supported Henry Wallace for the presidency that year.
The formation of the afl--cio in 1955 visibly testified to the powerful continuities persisting through the age of industrial unionism. Above all, the central purpose remained what it had always been--to advance the economic and job interests of the union membership. Collective bargaining performed impressively after World War II, more than tripling weekly earnings in manufacturing between 1945 and 1970, gaining for union workers an unprecedented measure of security against old age, illness, and unemployment, and, through contractual protections, greatly strengthening their right to fair treatment at the workplace. But if the benefits were greater and if they went to more people, the basic job-conscious thrust remained intact. Organized labor was still a sectional movement, covering at most only a third of America's wage earners and inaccessible to those cut off in the low-wage secondary labor market.
Nothing better captures the uneasy amalgam of old and new in the postwar labor movement than the treatment of minorities and women who flocked in, initially from the mass production industries, but after 1960 from the public and service sectors as well. Labor's historic commitment to racial and gender equality was thereby much strengthened, but not to the point of challenging the status quo within the labor movement itself. Thus the leadership structure remained largely closed to minorities--as did the skilled jobs that were historically the preserve of white male workers--notoriously so in the construction trades but in the industrial unions as well. Yet the afl--cio played a crucial role in the battle for civil rights legislation in 1964-1965. That this legislation might be directed against discriminatory trade union practices was anticipated (and quietly welcomed) by the more progressive labor leaders. But more significant was the meaning they found in championing this kind of reform: the chance to act on the broad ideals of the labor movement. And, so motivated, they deployed labor's power with great effect in the achievement of John F. Kennedy's and Lyndon B. Johnson's domestic programs during the 1960s.
This was ultimately economic, not political power, however, and as organized labor's grip on the industrial sector began to weaken, so did its political capability. From the early 1970s onward, new competitive forces swept through the heavily unionized industries, set off by deregulation in communications and transportation, by industrial restructuring, and by an unprecedented onslaught of foreign goods. As oligopolistic and regulated market structures broke down, nonunion competition spurted, concession bargaining became widespread, and plant closings decimated union memberships. The once-celebrated National Labor Relations Act increasingly hamstrung the labor movement; an all-out reform campaign to get the law amended failed in 1978. And with the election of Ronald Reagan in 1980, there came to power an anti-union administration the likes of which had not been seen since the Harding era. Between 1975 and 1985, union membership fell by 5 million. In manufacturing, the unionized portion of the labor force dropped below 25 percent, while mining and construction, once labor's flagship industries, were decimated. Only in the public sector did the unions hold their own. By the end of the 1980s, less than 17 percent of American workers were organized, half the proportion of the early 1950s.
Swift to change the labor movement has never been. But if the new high-tech and service sectors seemed beyond its reach in 1989, so did the mass production industries in 1929. And, as compared to the old afl, organized labor is today much more diverse and broadly based: 40 percent of its members are white-collar workers, 30 percent are women, and the 14.5 percent who are black signify a greater representation than in the general population and a greater rate of participation than by white workers (22.6 percent compared to 16.3 percent). In the meantime, however, the movement's impotence has been felt. "The collapse of labor's legislative power facilitated the adoption of a set of economic policies highly beneficial to the corporate sector and to the affluent," wrote analyst Thomas B. Edsall in 1984. And, with collective bargaining in retreat, declining living standards of American wage-earning families set in for the first time since the Great Depression. The union movement became in the 1980s a diminished economic and political force, and, in the Age of Reagan, this made for a less socially just nation.
The GDR was more democratic, in the original and substantive sense of the word, than eastern Germany was before 1949 and than the former East Germany has become since the Berlin Wall was opened in 1989. It was also more democratic in this original sense than its neighbor, West Germany. While it played a role in the GDR’s eventual demise, the Berlin Wall was at the time a necessary defensive measure to protect a substantively democratic society from being undermined by a hostile neighbor bent on annexing it.
By Stephen Gowans
While East Germany (the German Democratic Republic, or GDR) wasn’t a ‘workers’ paradise’, it was in many respects a highly attractive model that was responsive to the basic needs of the mass of people and therefore was democratic in the substantive and original sense of the word. It offered generous pensions, guaranteed employment, equality of the sexes and substantial wage equality, free healthcare and education, and a growing array of other free and virtually free goods and services. It was poorer than its West German neighbor, the Federal Republic of Germany, or FRG, but it started at a lower level of economic development and was forced to bear the burden of indemnifying the Soviet Union for the massive losses Germany inflicted upon the USSR in World War II. These conditions were largely responsible for the less attractive aspects of life in the GDR: lower pay, longer hours, and fewer and poorer consumer goods compared to West Germany, and restrictions on travel to the West. When the Berlin Wall was opened in 1989, a majority of the GDR’s citizens remained committed to the socialist basis of their society and wished to retain it. It wasn’t the country’s central planning and public ownership they rebelled against. These things produced what was best about the country. And while Cold War propaganda located East Germany well outside the ‘free world,’ political repression and the Stasi, the East German state security service, weren’t at the root of East Germans’ rebellion either. Ultimately, what the citizens of the GDR rebelled against was their comparative poverty. But this had nothing to do with socialism. East Germans were poorer than West Germans even before the Western powers divided Germany in the late 1940s, and remain poorer today. A capitalist East Germany, forced to start at a lower level of economic development and to disgorge war reparation payments to the USSR, would not have become the social welfare consumer society West Germany became and East Germans aspired after, but would have been at least as badly off as the GDR was, and probably much worse off, and without the socialist attractions of economic security and greater equality. Moreover, without the need to compete against an ideological rival, it’s doubtful the West German ruling class would have been under as much pressure to make concessions on wages and benefits. West Germans, then, owed many of their social welfare gains to the fact their neighbour to the east was socialist and not capitalist.
The Western powers divide Germany
While the distortions of Cold War history would lead one to believe it was the Soviets who divided Germany, the Western powers were the true authors of Germany’s division. The Allies agreed at the February 1945 Yalta conference that while Germany would be partitioned into French, British, US and Soviet occupation zones, the defeated Germany would be administered jointly. The hope of the Soviets, who had been invaded by Germany in both the first and second world wars, was for a united, disarmed and neutral Germany. The Soviets’ goals were twofold: First, Germany would be demilitarized, so that it could not launch a third war of aggression on the Soviet Union. Second, it would pay reparations for the massive damages it inflicted upon the USSR, calculated after the war to exceed $100 billion.
The Western powers, however, had other plans. The United States wanted to revive Germany economically to ensure it would be available as a rich market capable of absorbing US exports and capital investment. The United States had remained on the sidelines through a good part of the war, largely avoiding the damages that ruined its rivals, while at the same time acting as armourer to the Allies. At the end of the war, Britain, France, Germany, Japan and the USSR lay in ruins, while the US ruling class was bursting at the seams with war industry profits. The prospects for the post-war US economy, however, and hence for the industrialists, bankers and investors who dominated the country’s political decision-making, were dim unless new life could be breathed into collapsed foreign markets, which would be needed to absorb US exports and capital. An economically revived Germany was therefore an important part of the plan to secure the United States’ economic future. The idea of a Germany forced to pour out massive reparation payments to the USSR was intolerable to US policy makers: it would militate against the transformation of Germany into a sphere of profit-making for US capital, and would underwrite the rebuilding of an ideological competitor.
The United States intended to make post-war life as difficult as possible for the Soviet Union. There were a number of reasons for this, not least to prevent the USSR from becoming a model for other countries. Already, socialism had eliminated the United States’ access to markets and spheres of investment in one-sixth of the earth’s territory. The US ruling class didn’t want the USSR to provide inspiration and material aid to other countries to follow the same path. The lead role of communists in the resistance movements in Europe, “the success of the Soviet Union in defeating Nazi Germany,” and “the success of the Soviet Union in industrializing and modernizing,” had greatly raised the prestige of the USSR and enhanced the popularity of communism. Unless measures were taken to check the USSR’s growing popularity, socialism would continue to advance and the area open to US exports and investment would continue to contract. A Germany paying reparations to the Soviets was clearly at odds with the goals of reviving Germany and holding the Soviet Union in check. What’s more, while the Soviets wanted Germany to be permanently disarmed as a safeguard against German revanchism, the United States recognized that a militarized Germany under US domination could play a central role in undermining the USSR.
The division of Germany began in 1946, when the French decided to administer their zone separately. Soon, the Western powers merged their three zones into a single economic unit and announced they would no longer pay reparations to the Soviet Union. The burden would have to be borne by the Soviet occupation zone alone, which was smaller and less industrialized, and therefore less able to offer compensation.
In 1949, the informal division of Germany was formalized with the proclamation by the Western powers of a separate West German state, the FRG. The new state would be based on a constitution written by Washington and imposed on West Germans, without their ratification. (The GDR’s constitution, by contrast, was ratified by East Germans.) In 1954, West Germany was integrated into a new anti-Soviet military alliance, NATO, which, in its objectives, aped the earlier anti-Comintern pact of the Axis powers. The goal of the anti-Comintern pact was to oppose the Soviet Union and world communism. NATO, with a militarized West Germany, would take over from where the Axis left off.
The GDR was founded in 1949, only after the Western powers created the FRG. The Soviets had no interest in transforming the Soviet occupation zone into a separate state and complained bitterly about the Western powers’ division of Germany. Moscow wanted Germany to remain unified, but demilitarized and neutral and committed to paying war reparations to help the USSR get back on its feet. As late as 1954, the Soviets offered to dissolve the GDR in favour of free elections under international supervision, leading to the creation of a unified, unaligned, Germany. This, however, clashed with the Western powers’ plan of evading Germany’s responsibility for paying war reparations and of integrating West Germany into the new anti-Soviet, anti-communist military alliance. The proposal was, accordingly, rejected. George Kennan, the architect of the US policy of ‘containing’ (read undermining) the Soviet Union, remarked: “The trend of our thinking means that we do not want to see Germany reunified at this time, and that there are no conditions on which we would really find such a solution satisfactory.”
This placed the anti-fascist working class leadership of the GDR in a difficult position. The GDR comprised only one-third of German territory and had a population of 17 million. By comparison, the FRG comprised 63 million people and made up two-thirds of German territory. Less industrialized than the West, the new GDR started out poorer than its new capitalist rival. Per capita income was about 27 percent lower than in the West. Much of the militant section of the working class, which would have ardently supported a socialist state, had been liquidated by the Nazis. The burden of paying war reparations to the Soviets now had to be borne solely by the GDR. And West Germany ceaselessly harassed and sabotaged its neighbor, refusing to recognize it as a sovereign state, regarding it instead as its own territory temporarily under Soviet occupation. Repeatedly, West Germany proclaimed that its official policy was the annexation of its neighbor to the east.
The GDR’s leaders faced still other challenges. Compared to the West, East Germany suffered greater losses in the war. The US Army stripped the East of its scientists, technicians and technical know-how, kidnapping “thousands of managers, engineers, and all sorts of experts, as well as the best scientists – the brains of Germany’s East – from their factories, universities, and homes in Saxony and Thuringia in order to put them to work to the advantage of the Americans in the Western zone – or simply to have them waste away there.”
As Pauwels explains,
“During the last weeks of the hostilities the Americans themselves had occupied a considerable part of the Soviet zone, namely Thuringia and much of Saxony. When they pulled out at the end of June, 1945, they brought back to the West more than 10,000 railway cars full of the newest and best equipment, patents, blueprints, and so on from the firm Carl Zeiss in Jena and the local plants of other top enterprises such as Siemens, Telefunken, BMW, Krupp, Junkers, and IG-Farben. This East German war booty included plunder from the Nazi V-2 factory in Nordhausen: not only the rockets, but also technical documents with an estimated value of 400 to 500 million dollars, as well as approximately 1,200 captured German experts in rocket technology, one of whom being the notorious Wernher von Braun.”
The Allies agreed at Yalta that a post-war Germany would pay the Soviet Union $10 billion in compensation for the damages inflicted on the USSR during the war. This was a paltry sum compared to the more realistic estimate of $128 billion arrived at after the war. And yet the Soviets were shortchanged on even this meagre sum. The USSR received no more than $5.1 billion from the two German states, most of it from the GDR. The Soviets took $4.5 billion out of East Germany, carting away whole factories and railways, while the larger and richer FRG paid a miserable $600 million. The effect was the virtual deindustrialization of the East. In the end, the GDR would compensate both the United States (which suffered virtually no damage in World War II) through the loss of its scientists, technicians, blueprints, patents and so on, and the Soviet Union (which suffered immense losses and deserved to be compensated), through the loss of its factories and railways. Moreover, the United States offered substantial aid to West Germany to help it rebuild, while the poorer Soviet Union, which had been devastated by the German invasion, lacked the resources to invest in the GDR. The West was rebuilt; the East stripped bare.
The GDR’s democratic achievements
Despite the many burdens it faced, the GDR managed to build a standard of living higher than that of the USSR “and that of millions of inhabitants of the American ghettoes, of countless poor white Americans, and of the population of most Third World countries that have been integrated willy-nilly with the international capitalist world system.”
Over 90 percent of the GDR’s productive assets were owned by the country’s citizens collectively, while in West Germany productive assets remained privately owned, concentrated in a few hands. Because the GDR’s economy was almost entirely publicly owned and the leadership was socialist, the economic surplus that people produced on the job went into a social fund to make the lives of everyone better rather than into the pockets of shareholders, bondholders, landowners and bankers. Out of the social fund came subsidies for food, clothing, rent, public transportation, as well as cultural, social and recreational activities. Wages weren’t as high as in the West, but a growing number of essential goods and services were free or virtually free. Rents, for example, were very low. As a consequence, there were no evictions and there was no homelessness. Education was free through university, and university students received stipends to cover living expenses. Healthcare was also free. Childcare was highly subsidized.
Differences in income levels were narrow, with higher wages paid to those working in particularly strenuous or dangerous occupations. Full gender equality was mandated by law and men and women were paid equally for the same work, long before gender equality was taken up as an issue in the West. What’s more, everyone had a right to a job. There was no unemployment in the GDR.
Rather than supporting systems of oppression and exploitation, as the advanced capitalist countries did in Africa, Latin America and Asia, the GDR assisted the people of the global South in their struggles against colonialism. Doctors were dispatched to Vietnam, Mozambique and Angola, and students from many Third World countries were trained and educated in the GDR at the GDR’s expense.
Even the Wall Street Journal recognized the GDR’s achievements. In February, 1989, just months before the opening of the Berlin Wall, the US ruling class’s principal daily newspaper announced that the GDR “has no debt problem. The 17 million East Germans earn 30 percent more than their next richest partners, the Czechoslovaks, and not much less than the English. East Germans build 32-bit mini-computers and a socialist ‘Walkman’ and the only queue in East Berlin forms at the opera.”
The downside was that compared to West Germany, wages were lower, hours of work were longer, and there were fewer consumer goods. Also, consumer goods tended to be inferior compared to those available in West Germany. And there were travel restrictions. Skilled workers were prevented from travelling to the West. But at the same time, vacations were subsidized, and East Germans could travel throughout the socialist bloc.
West Germany’s comparative wealth offered many advantages in its ideological battle with socialism. For one, the wealth differential could be attributed deceptively to the merits of capitalism versus socialism. East Germany was poorer, it was said, not because it unfairly bore the brunt of indemnifying the Soviets for their war losses, and not because it started on a lower rung, but because public ownership and central planning were inherently inefficient. The truth of the matter, however, was that East German socialism was more efficient than West German capitalism, producing faster growth rates, and was more responsive to the basic needs of its population. “East Germany’s national income grew in real terms about two percent faster annually than the West German economy between 1961 and 1989.”
The GDR was also less repressive politically. Following in the footsteps of Hitler, West Germany banned the Communist Party in the 1950s, and close tabs were kept by West Germany’s own ‘secret’ police on anyone openly expressing Marxist-Leninist views. Marxist-Leninists were barred from working in the public service and frequently lost private sector jobs owing to their political views. In the GDR, by contrast, those who expressed views at odds with the dominant Marxist-Leninist ideology did not lose their jobs, and were not cut off from the state’s generous social supports, though they too were monitored by the GDR’s ‘secret’ police. The penalty for dissenting from the dominant political ideology in the West (loss of income) was more severe than in the East.
The claim that the GDR’s socialism was less efficient than West Germany’s capitalism was predicated on the disparity in wealth between the two countries, but the roots of the disparity were external to the two countries’ respective systems of ownership, and the disparity existed prior to 1949 (at which point GDP per capita was about 43 percent higher in the West) and continued to exist after 1989 (when unemployment – once virtually eliminated — soared and remains today double what it is in the former West Germany.) Over the four decades of its existence, East German socialism attenuated the disparity, bringing the GDR closer to West Germany’s GDP per capita. Significantly, “real economic growth in all of Eastern Europe under communism was estimated to be higher than in Western Europe under capitalism (as well as higher than that in the USA) even in communism’s final decade (the 1980s).” After the opening of the Berlin Wall, with capitalism restored, “real economic output fell by over 30 percent in Eastern Europe as a whole in the 1990s.”
But the GDR’s faster growth rates from 1961 to 1989 tell only part of the story. It’s possible for GDP to grow rapidly, with few of the benefits reaching the bulk of the population. The United States spends more on healthcare as a percentage of its GDP than all other countries, but US life expectancy and infant mortality results are worse than in many other countries which spend less (but have more efficient public health insurance or socialized systems.) This is due to the reality that healthcare is unequally distributed in the United States, with the wealthy in a position to buy the best healthcare in the world while tens of millions of low-income US citizens can afford no or only inadequate healthcare. By contrast, in most advanced capitalist countries everyone has access to basic (though typically not comprehensive) healthcare. In socialist Cuba, comprehensive healthcare is free to all. What’s important, then, is not only how much wealth (or healthcare) a society creates, but also how a society’s wealth (or healthcare) is distributed. Wealth was far more evenly distributed in socialist countries than it was in capitalist countries. The mean Gini coefficient – a measure of income equality which runs from 0 (perfect equality) to 1 (perfect inequality) – was 0.24 for socialist countries in 1970 compared to 0.48 for capitalist countries.
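For readers unfamiliar with the measure, the Gini coefficient cited above is conventionally defined (this is the standard textbook formula, not one given in the source) as

G = \frac{\sum_{i=1}^{n}\sum_{j=1}^{n} \lvert x_i - x_j \rvert}{2 n^{2} \bar{x}}

where x_1, …, x_n are individual incomes and \bar{x} is their mean. G equals 0 when every income is identical and approaches 1 as income concentrates in a single recipient. Read against this definition, the quoted figures of 0.24 versus 0.48 imply that average pairwise income differences, relative to mean income, were roughly twice as large in the capitalist group of countries as in the socialist group.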
Socialist countries also fared better at meeting their citizens’ basic needs. Compared to all capitalist countries, socialist countries had higher life expectancies, lower levels of infant mortality, and higher levels of literacy. However, the comparison of all socialist countries with all capitalist countries is unfair, because the group of capitalist countries comprises many more countries unable to effectively meet the basic needs of their populations owing to their low level of economic development. While capitalism is often associated with the world’s richest countries, the world’s poorest countries are also capitalist. Desperately poor Haiti, for example, is a capitalist country, while neighboring Cuba, richer and vastly more responsive to the needs of its citizens, is socialist. We would expect socialist countries to have done a better job at meeting the basic needs of their citizens, because they were richer, on average, than all capitalist countries together. But the conclusion still stands if socialist countries are compared with capitalist countries at the same level of economic development; that is, socialist countries did a better job of meeting their citizens’ basic needs compared to capitalist countries in the same income range. Even when comparing socialist countries to the richest capitalist countries, the socialist countries fared well, meeting their citizens’ basic needs as well as advanced capitalist countries met the needs of their citizens, despite the socialist countries’ lower level of economic development and fewer resources. In terms of meeting basic needs, then, socialism was more efficient: it did more with less.
Why were socialist countries, like the GDR, more efficient? First, socialist societies were committed to improving the living standards of the mass of people as their first aim (whereas capitalist countries are organized around profit-maximization as their principal goal – a goal linked to a minority that owns capital and land and derives its income from profits, rent and interest, that is, the exploitation of other people’s labor, rather than wages.) Secondly, the economic surplus the citizens of socialist countries produced was channelled into making life better for everyone (whereas in capitalist countries the economic surplus goes straight to shareholders, bondholders, landowners and bankers.) This made socialism more democratic than capitalism in three ways:
• It was more equal. (Capitalism, by contrast, produces inequality.)
• It worked toward improving as much as possible the lot of the classes which have no other means of existence but the labor of their hands and which comprise the vast majority of people. (Capitalist societies, on the other hand, defend and promote the interests of the minority that owns capital.)
• It guaranteed economic and social rights. (By comparison, capitalist societies emphasize political and civil liberties, i.e., protections against the majority using its greater numbers to encroach upon the privileges of the minority that owns and controls the economy.)
As will be discussed below, even when it came to political (as distinct from social and economic) democracy, the differences between East and West Germany were more illusory than real.
Stanching the outward migration of skilled workers
Despite the many advantages the GDR offered, it remained less affluent throughout its four decades compared to its capitalist neighbor to the west. For many “the lure of higher salaries and business opportunities in the West remained strong.” As a result, in its first decade, East Germany’s population shrank by 10 percent. And while higher wages proved to be an irresistible temptation to East Germans who stressed personal aggrandizement over egalitarian values and social security, the FRG – keen to weaken the GDR – did much to sweeten the pot, offering economic inducements to skilled East Germans to move west. Working-age, but not retired, East Germans were offered interest-free loans, access to scarce apartments, immediate citizenship and compensation for property left behind, to relocate to the West.
By 1961, the East German government decided that defensive measures needed to be taken, otherwise its population would be depleted of people with important skills vital to building a prosperous society. East German citizens would be barred from entering West Germany without special permission, while West Germans would be prevented from freely entering the GDR. The latter restriction was needed to break up black market currency trading, and to inhibit espionage and sabotage carried out by West German agents. Walls, fences, minefields and other barriers were deployed along the length of the East’s border with the West. Many of the obstacles had existed for years, but until 1961, Berlin – partitioned between the West and East – remained free of physical barriers. The Berlin Wall – the GDR leadership’s solution to the problems of population depletion and Western sabotage and espionage — went up on August 13, 1961.
From 1961 to 1989, 756 East German escapees, an average of 30 per year, were shot, drowned, or blown apart by mines, or committed suicide after being captured. By comparison, hundreds of Mexicans die every year trying to escape poor Mexico into the far wealthier United States. Approximately 50,000 East Germans were captured trying to cross the border into West Germany from 1961 to 1989. Those who were caught served prison sentences of one year.
Over time, the GDR gradually relaxed its border controls, allowing working-age East Germans to visit the West if there was little risk of their not returning. While in the 1960s, only retirees over the age of 65 were permitted to travel to the West, by the 1980s, East Germans 50 years of age or older were allowed to cross the border. Those with relatives in the FRG were also allowed to visit. By 1987, close to 1.3 million working-age East Germans were permitted to travel to West Germany. Virtually all of them – over 99 percent – returned.
However, not all East Germans were granted the right to cross the border. In 1987, 300,000 requests were turned down. East Germans only received permission after being cleared by the GDR’s state security service, the Stasi. One of the effects of loosening the border restrictions was to swell the Stasi’s ranks, in order to handle the increase in applications for visits to the West.
Pauwels reminds us that,
“A hypothetical capitalist East Germany would likewise have also had to build a wall in order to prevent its population from seeking salvation in another, more prosperous Germany. Incidentally, people have fled and continue to flee, to richer countries also from poor capitalist countries. However, the numerous black refugees from extremely poor Haiti, for example, have never enjoyed the same kind of sympathy in the United States and elsewhere in the world that was bestowed so generously on refugees from the GDR during the Cold War…And should the Mexican government decide to build a ‘Berlin Wall’ along the Rio Grande in order to prevent their people from escaping to El Norte, Washington would certainly not condemn such an initiative the way it used to condemn the infamous East Berlin construction project.”
GDR sets standards for working class in FRG…and abroad
Despite its comparative poverty, the GDR furnished its citizens with generous pensions, free healthcare and education, inexpensive vacations, virtually free childcare and public transportation, and paid maternity leave, as fundamental rights. Even so, East Germany’s standard of living continued to lag behind that of the upper sections of the working class in the West. The comparative paucity and lower quality of consumer goods, and lower wages, were the product of a multitude of factors that conspired against the East German economy: its lower starting point; the need to invest in heavy industry at the expense of light industry; blockade and sanctions imposed by the West; the furnishing of aid to national liberation movements in the global South (which benefited the South more than it did the GDR; by comparison, aid flows from Western countries were designed to profit Western corporations, banks and investors). What East Germany lacked in consumer goods and wages, it made up for in economic security. The regular economic crises of capitalist economies, with their rampant underemployment and joblessness, escalating poverty and growing homelessness, were absent in the GDR.
The greater security of life for East Germans presented a challenge to the advanced capitalist countries. Intent on demonstrating that capitalism was superior to socialism, governments and businesses in the West were forced to meet the standards set by the socialist countries to secure the hearts and minds of their own working class. Generous social insurance, provisions against lay-offs and representation on industrial councils were conceded to West German workers. But these were revocable concessions, not the inevitable rewards of capitalism.
East Germany’s robust social wage acted in much the same way strong unions do in forcing non-unionized plants to provide wages and benefits to match union standards. In the 1970s, Canada’s unionized Stelco steel mill at Hamilton, Ontario set the standard for the neighboring non-unionized Dofasco plant. What the Stelco workers won through collective bargaining, the non-unionized Dofasco workers received as a sop to keep the union out. But once the union goes, the motivation to pay union wages and provide union benefits disappears. Likewise, with the demise of East Germany and the socialist bloc, the need to provide a robust social safety net in the advanced capitalist countries to secure the loyalty of the working class no longer existed. Hence, the GDR not only furnished its own citizens with economic security, but indirectly forced the advanced capitalist countries to make concessions to their own workers. The demise of the GDR therefore not only hurt Ossis (East Germans), depriving them of economic security, but also hurt the working populations of the advanced capitalist countries, whose social programs were the spill-over product of capitalism’s ideological battle with socialism. It is no accident that the clawing back of reforms and concessions granted by capitalist ruling classes during the Cold War has accelerated since the opening of the Berlin Wall.
The collapse of the GDR and the socialist bloc has proved injurious to the interests of Western working populations in another way, as well. From the Bolshevik Revolution in 1917 to the opening of the Berlin Wall in 1989, the territory available to capitalist exploitation steadily diminished. This limited the degree of wage competition within the capitalist global labor force to a degree that wouldn’t have been true had the forces of socialism and national liberation not steadily advanced through the twentieth century. The counter-revolution in the Soviet Union and Eastern Europe, and China’s opening to foreign investment, ushered in a rapid expansion worldwide in the number of people vying for jobs. North American and Western European workers didn’t compete for jobs with workers in Poland, Romania, Slovakia and Russia in 1970. They do today. The outcome of the rapid expansion of the pool of wage-labor worldwide for workers in the advanced capitalist countries has been a reduction in real wages and explosive growth in the number of permanent lay-offs as competition for jobs escalates. The demise of socialism in Eastern Europe (and China’s taking the capitalist road) has had very real – and unfavourable – consequences for working people in the West.
Since the opening of the Berlin Wall and the annexation of the GDR by the FRG in 1990, the former East Germany has been transformed from a rapidly industrializing country where everyone was guaranteed a job and access to a growing array of free and nearly free goods and services, to a de-industrialized backwater teeming with the unemployed where the population is being hollowed out by migration to the wealthier West. “The easterners,” a New York Times article remarked in 2005, “are notoriously unhappy.” Why? “Because life is less secure than it used to be under Communism.”
During the Cold War East Germans who risked their lives to breach the Berlin Wall were depicted as refugees from political repression. But their escape into the wealthier West had little to do with flight from political repression and much to do with being attracted to a higher standard of living. Today Ossis stream out of the East, just as they did before the Berlin Wall sprang up in 1961. More than one million people have migrated from the former East Germany to the West since 1989. But these days, economic migrants aren’t swapping modestly-paid jobs, longer hours and fewer and poorer consumer goods in the East for higher paying jobs, shorter hours and more and better consumer goods in the West. They’re leaving because they can’t find work. The real unemployment rate, taking into account workers forced into early retirement or into the holding pattern of job re-training schemes, reaches as high as 50 percent in some parts of the former East Germany. And the official unemployment rate is twice as high in the East as it is in the West. Erich Quaschnuk, a retired railroad worker, acknowledges that “the joy back then when the Berlin Wall fell was real,” but quickly adds, “the promise of blooming landscapes never appeared.”
Twenty years after the opening of the Berlin Wall, one-half of people living in the former East Germany say there was more good than bad about the GDR, and that life was happier and better. Some Ossis go so far as to say they “were driven out of paradise when the Wall came down” while others thank God they were able to live in the GDR. Still others describe the unified Germany as a “slave state” and a “dictatorship of capital,” and reject Germany for “being too capitalist or dictatorial, and certainly not democratic.”
Much as the GDR was faulted for being less democratic politically than the FRG, the FRG’s claim to being more democratic politically is shaky at best.
“East Germany…permitted voters to cast secret ballots and always had more than one candidate for each government position. Although election results typically resulted in over 99 percent of all votes being for candidates of parties that did not favour revolutionary changes in the East German system (just as West German election results generally resulted in over 99 percent of the people voting for non-revolutionary West German capitalist parties), it was always possible to change the East German system from within the established political parties (including the communist party), as those parties were open to all and encouraged participation in the political process. The ability to change the East German system from within is best illustrated by the East German leader who opened up the Berlin Wall and initiated many political reforms in less than two months in power.”
West Germany outlawed many anti-capitalist political parties and organizations, including, in the 1950s, the popular Communist Party, as Hitler did in the 1930s. (On the other side of the Berlin Wall, no party that aimed to reverse socialism or withdraw from the Warsaw Pact was allowed.) The West German parties tended to be pro-capitalist, and those that weren’t didn’t have access to the resources the wealthy patrons of the mainstream political parties could provide to run the high-profile marketing campaigns that were needed to command significant support in elections. What’s more, West Germans were dissuaded from voting for anti-establishment parties, for fear the victory of a party with a socialist platform would be met by capital strike or flight, and therefore the loss of their jobs. The overwhelming support for pro-capitalist parties, then, rested on two foundations: The pro-capitalist parties uniquely commanded the resources to build messages with mass appeal and which could be broadcast with sufficient volume to reach a mass audience, and the threat of capital strike and capital flight disciplined working class voters to support pro-business parties.
No one would have built a Berlin Wall if they didn’t have to. But in 1961, with the GDR being drained of its working population by a West Germany that had skipped out on its obligations to indemnify the Soviet Union for the losses the Nazis had inflicted upon it in World War II, there were few options, apart from surrender. The Berlin Wall was, without question, regrettable, but it was at the same time a necessary defensive measure. If the anti-fascist, working class leadership of the GDR was to have any hope of building a mass society that was responsive to the basic needs of the working class and which channelled its economic surplus into improving the living conditions and economic security of all, drastic measures would have to be taken; otherwise, the experiment in German democracy — that of building a state that operated on behalf of the mass of people, rather than a minority of shareholders, bondholders, landowners and bankers — would have to be abandoned. And yet, by the standards of drastic measures, this was hardly drastic. Wars weren’t waged, populations weren’t expelled, mass executions weren’t carried out. Instead, people of working-age were prevented from resettling in the West.
The abridgment of mobility rights was hardly unique to revolutionary situations. While the needs of Cold War propaganda pressed Washington to howl indignantly over the GDR’s measures to stanch the flow of its working-age population to the West, the restriction of mobility rights had not been unknown in the United States’ own revolution, where the ‘freedoms’ of dissidents and people of uncertain loyalty had been freely revoked. “During the American Revolution…those who wished to cross into British territory had to obtain a pass from the various State governments or military commanders. Generally, a pass was granted only to individuals of known and acceptable ‘character and views’ and after their promise neither to inform or otherwise to act to the prejudice of the United States. Passes, even for those whose loyalty was guaranteed, were generally difficult to acquire.”
Was the GDR worth defending? Is its demise to be regretted? Unquestionably. The GDR was a mass society that channelled the surplus of the labor of all into the betterment of the conditions of all, rather than into the pockets of the few. It offered its citizens an expanding array of free and virtually free goods and services, was more equal than capitalist countries, and met its citizens’ basic needs better than did capitalist countries at the same level of economic development. Indeed, it met basic needs as well as richer countries did, with fewer resources, in the same way Cuba today meets the basic healthcare needs of all its citizens better than the vastly wealthier United States meets (or rather fails to meet) those of tens of millions of its own citizens. And while the GDR was poorer than West Germany and many other advanced capitalist countries, its comparative poverty was not the consequence of the country’s public ownership and central planning, but of a lower starting point and the burden of having to help the Soviet Union rebuild after the massive devastation Germany inflicted upon it in World War II. Far from being inefficient, public ownership and central planning turned the eastern part of Germany into a rapidly industrializing country which grew faster economically than its West German neighbor and shared the benefits of its growth more evenly. In the East, the economy existed to serve the people. In the West, the people existed to serve the minority that owned and controlled the economy. Limiting mobility rights, just as they have been limited in other revolutions, was a small price to pay to build, not what anyone would be so naïve as to call a workers’ paradise, but what can be called a mass, or truly democratic, society, one which made responsiveness to the basic needs of the mass of people its principal aim.
1. Austin Murphy, The Triumph of Evil: The Reality of the USA’s Cold War Victory, European Press Academic Publishing, 2000.
2. Henry Heller, The Cold War and the New Imperialism: A Global History, 1945-2005, Monthly Review Press, New York, 2006.
3. Jacques R. Pauwels, The Myth of the Good War: America in the Second World War, James Lorimer & Company Ltd., Toronto, 2002; R. Palme Dutt, The Internationale, Lawrence & Wishart Ltd., London, 1964.
4. Melvyn Leffler, “New perspectives on the Cold War: A conversation with Melvyn Leffler,” November 1998. http://www.neh.gov/news/humanities/1998-11/leffler.html
6. John Wight, “From WWII to the US empire,” The Morning Star (UK), October 11, 2009.
7. John Green, “Looking back at life in the GDR,” The Morning Star (UK), October 7, 2009.
8. Shirley Ceresto, “Socialism, capitalism, and inequality,” The Insurgent Sociologist, Vol. XI, No. 2, Spring, 1982.
9. Dutt; William Blum, Killing Hope: U.S. Military and CIA Interventions Since World War II, Common Courage Press, Maine, 1995.
18. The Wall Street Journal, February 22, 1989.
34. Fred Goldstein, Low-Wage Capitalism, World View Forum, New York, 2008.
36. The New York Times, December 6, 2005.
37. The Guardian (UK), November 15, 2006.
38. “Disappointed Eastern Germans turn right,” The Los Angeles Times, May 4, 2005.
39. Julia Bonstein, “Majority of Eastern Germans felt life better under communism,” Der Spiegel, July 3, 2009.
41. Albert Szymanski, Human Rights in the Soviet Union, Zed Book Ltd., London, 1984
18 | HISTORY OF MALAWI
History and Colonialism
The first inhabitants of present-day Malawi were probably related to the San (Bushmen). Between the 1st and 4th cent. A.D., Bantu-speaking peoples migrated to present-day Malawi. A new wave of Bantu-speaking peoples arrived around the 14th century, and they soon coalesced into the Maravi kingdom (late 15th–late 18th century), centered in the Shire River valley. In the 18th cent. the kingdom conquered portions of modern Zimbabwe and Mozambique. However, shortly thereafter it declined as a result of internal rivalries and incursions by the Yao, who sold their Malawi captives as slaves to Arab and Swahili merchants living on the Indian Ocean coast. In the 1840s the region was thrown into further turmoil by the arrival from South Africa of the warlike Ngoni.
In 1859, David Livingstone, the Scots explorer, visited Lake Nyasa and drew European attention to the effects of the slave trade there; in 1873 two Presbyterian missionary societies established bases in the region. Missionary activity, the threat of Portuguese annexation, and the influence of Cecil Rhodes led Great Britain to send a consul to the area in 1883 and to proclaim the Shire Highlands Protectorate in 1889. In 1891 the British Central African Protectorate (known from 1907 until 1964 as Nyasaland), which included most of present-day Malawi, was established. During the 1890s, British forces ended the slave trade in the protectorate. At the same time, Europeans established coffee-growing estates in the Shire region, worked by Africans.

In 1915 a small-scale revolt against British rule was easily suppressed, but it was an inspiration to other Africans intent on ending foreign domination. In 1944 the protectorate's first political movement, the moderate Nyasaland African Congress, was formed, and in 1949 the government admitted the first Africans to the legislative council. In 1953 the Federation of Rhodesia and Nyasaland (linking Nyasaland, Northern Rhodesia, and Southern Rhodesia) was formed, over the strong opposition of Nyasaland's African population, who feared that the more aggressively white-oriented policies of Southern Rhodesia (see Zimbabwe) would eventually be applied to them.
The Banda Regime and Modern Malawi
In the mid-1950s the congress, headed by H. B. M. Chipembere and Kanyama Chiume, became more militant. In 1958, Dr. Hastings Kamuzu Banda became the leader of the movement, which was renamed the Malawi Congress Party (MCP) in 1959. Banda organized protests against British rule that led to the declaration of a state of emergency in 1959–60. The Federation of Rhodesia and Nyasaland was ended in 1963, and on July 6, 1964, Nyasaland became independent as Malawi.
Banda led the country in the era of independence, first as prime minister and, after Malawi became a republic in 1966, as president; he was made president for life in 1971. He quickly alienated other leaders by governing autocratically, by allowing Europeans to retain considerable influence within the country, and by refusing to oppose white-minority rule in South Africa. Banda crushed a revolt led by Chipembere in 1965 and one led by Yatuta Chisiza in 1967.
Arguing that the country's economic well-being depended on friendly relations with the white-run government in South Africa, Banda established diplomatic ties between Malawi and South Africa in 1967. In 1970, Prime Minister B. J. Vorster of South Africa visited Malawi, and in 1971 Banda became the first head of an independent black African nation to visit South Africa. This relationship drew heavy public criticism. Nonetheless, Malawi enjoyed considerable economic prosperity in the 1970s, attributable in large part to foreign investment.
Throughout the decade, Malawi became a refuge for antigovernment rebels from neighboring Mozambique, causing tension between the two nations, as did the influx (in the late 1980s) of more than 600,000 civil war refugees, prompting Mozambique to close its border. The border closure forced Malawi to use South African ports at great expense. In the face of intense speculation over Banda's successor, he began to eliminate powerful officials through expulsions and possibly assassinations.
In 1992, Malawi suffered the worst drought of the century. That same year there were widespread protests against Banda's rule, and Western nations suspended aid to the country. In a 1993 referendum Malawians voted for an end to one-party rule, and parliament passed legislation establishing a multiparty democracy and abolishing the life presidency. In a free election in 1994, Banda was defeated by Bakili Muluzi, his former political protégé, who called for a policy of national reconciliation. Muluzi formed a coalition cabinet, with members from his own United Democratic Front (UDF) and the rival Alliance for Democracy (AFORD). Disillusioned with the coalition, AFORD pulled out of the government in 1996. When Muluzi was reelected in 1999, AFORD joined the MCP in an unsuccessful attempt to prevent his inauguration.
President Bingu wa MUTHARIKA, elected in May 2004 after a failed attempt by the previous president (Muluzi) to amend the constitution to permit another term, struggled to assert his authority against his predecessor, culminating in MUTHARIKA quitting the political party on whose ticket he was elected into office. MUTHARIKA subsequently started his own party, the Democratic Progressive Party (DPP), and has continued with a halting anti-corruption campaign against abuses carried out under the previous government. (The preceding history was excerpted from The Columbia Electronic Encyclopedia.)
INFORMATION ON MALAWI
Geography and Ethnicity
Malawi is long and narrow, and about 20% of its total
area is made up of Lake
Malawi. Several rivers flow into Lake
Malawi from the west, and the Shire River (a tributary
of the Zambezi) drains the lake in the south. Both
the lake and the Shire lie within the Great Rift
Valley. Much of the rest of the country is made
up of a plateau that averages 2,500 to 4,500 ft
(762–1,372 m) in height, but reaches elevations
of c.8,000 ft (2,440 m) in the north and almost
10,000 ft (3,050 m) in the south. Malawi is divided
into 24 administrative districts. In addition to
the capital and Blantyre, other cities include
Mzuzu and Zomba.
Almost all of the
country's inhabitants are Bantu-speakers and
about 90% are rural. The Tumbuka, Ngoni, and
Tonga (in the north) and the Chewa, Yao, Nguru,
and Nyanja (in the center and south) are the main
subgroups. About 80% of Malawi is Christian (mostly
Protestant and Roman Catholic), and roughly 13%
is Muslim; the rest follow traditional beliefs.
English and Chichewa are official languages; other
languages have regional importance.
The population of Malawi is estimated at 13.01 million (July est.);
estimates for this country explicitly take into
account the effects of excess mortality due to
AIDS; this can result in lower life expectancy,
higher infant mortality and death rates, lower
population and growth rates, and changes in the
distribution of population by age and sex than
would otherwise be expected. It
is estimated that 14.2% of the adult population (900,000
Malawians) are living with HIV/AIDS (2003 est) and
that 84,000 per year die from the disease.
Malawi has two main seasons, the
dry and the wet. The wet season extends from November
to April. Rainfall can reach between 635mm and
3050mm, depending on altitude and position of the
area. From May to August, it is cool and dry. July
is the mid-winter month. September is hot and dry, with
October and November the hottest months, when rains are
expected almost throughout the country.
Malawi is a multiparty democracy governed under
the constitution of 1995. The president, who is
both chief of state and head of government, is
popularly elected for a five-year term. The legislature
consists of a 177-seat national assembly whose
members are also elected by popular vote for five-year terms.
Chief of state: President Bingu wa MUTHARIKA (since 24 May 2004);
note - the president is both the
chief of state and head of government.
Head of government: President Bingu wa MUTHARIKA
(since 24 May 2004); note - the president is both
the chief of state and head of government.
Cabinet: 46-member Cabinet named by the president.
Elections: president elected by popular vote
for a five-year term; election last held 20 May
2004 (next to be held in May 2009).
The flag consists of three equal horizontal bands of black (top), red,
and green with a radiant, rising, red sun centered
in the black band.
Landlocked Malawi ranks among the world's least
developed countries. The economy is predominately
agricultural, with about 85% of the population
living in rural areas. Agriculture accounts for
36% of GDP and 80% of export revenues as of 2005.
The performance of the tobacco sector is key
to short-term growth as tobacco accounts for
over 53% of exports.
The main crops are corn, cotton, millet, rice, peanuts,
cassava, and potatoes. Tea, tobacco, sugarcane,
and tung oil are produced on large estates. With
the aid of foreign investment, Malawi has instituted
a variety of agricultural development programs.
Large numbers of poultry, goats, cattle, and pigs are raised.
There are small fishing and forest products industries.
Deforestation has become a problem as the growing
population uses more wood (the major energy source)
and woodland is cleared for farms. Practically
no minerals are extracted, but there are unexploited
deposits of bauxite, uranium, and coal. Malawi's
few manufactures are limited to basic goods, such
as processed food and beverages, lumber, textiles,
construction materials, and small consumer goods.
Leading imports are foodstuffs, petroleum products,
manufactured consumer goods, and transport equipment;
the principal exports are tobacco, tea, sugar,
coffee, peanuts, and forest products. The chief
trade partners are South Africa, Germany, the United
States, and Japan. Most of the country's foreign
trade is conducted via Salima, a port on Lake Nyasa,
which is connected by rail with the seaports of
Beira and Nacala in Mozambique. Malawi is a member
of the Southern African Development Community.
The economy depends
on substantial inflows of economic assistance
from the IMF, the World Bank, and individual
donor nations. In 2006, Malawi was approved for
relief under the Heavily Indebted Poor Countries
(HIPC) program. The government faces many challenges,
including developing a market economy, improving
educational facilities, facing up to environmental
problems, dealing with the rapidly growing problem
of HIV/AIDS, and satisfying foreign donors that
fiscal discipline is being tightened. In 2005,
President MUTHARIKA championed an anticorruption
campaign. Since 2005 President MUTHARIKA'S government
has exhibited improved financial discipline under
the guidance of Finance Minister Goodall GONDWE.
The currency is
the Kwacha. Recent historical exchange rates are
as follows: Kwachas per US dollar - 138.7 (12/31/2008);
137.5 (12/31/2007); 143.69 (12/31/2006);
126.00 (12/31/2005); 105.76 (12/31/2004); 107.60
(12/31/2003); 87.27 (12/31/2002); 68.87
(12/31/2001); 47.49 (12/31/2000); 46.66 (12/31/1999);
45.25 (12/31/1998); 21.75 (12/31/1997).
Disputes with Tanzania over the boundary in Lake Nyasa (Lake Malawi) and
the meandering Songwe River remain dormant.
| http://www.eyesonafrica.net/african-safari-malawi/malawi-info.htm | 13
22 | By Dr Ananya Mandal, MD
Hearing loss or hearing impairment may be of two major types - conductive hearing loss and sensorineural hearing loss. A third, mixed type has underlying features of both conductive and sensorineural hearing loss. 1-6
Normal ear anatomy
The normal ear consists of a narrow canal that lets in the sound waves. This is called the external ear or the ear canal. These waves enter the ear canal and strike the ear drum.
The ear drum (called the tympanic membrane) is a membrane that vibrates as the sound waves hit it. These vibrations are passed to the three small bones (ossicles) inside the middle ear. These are called malleus, incus and stapes bones.
The ossicles move to amplify the vibrations and pass them on to the inner ear. The inner ear contains a shell shaped organ called the cochlea. Within the cochlea are tiny hair cells all along the inner walls. These move in response to the vibrations and send a signal through the auditory nerve to the brain.
Decibels hearing loss
The normal hearing range is 0-20 decibels (dB). Whispers are around 30 dB, average home noises around 50 dB, and conversational speech about 60 dB. Sounds such as jet engine noise exceed 140 dB and are painful.
Hearing loss is measured in decibels hearing loss (dB HL), banded as follows (a small code sketch of the same thresholds appears after this list):
- 25-39 dB HL means mild hearing loss (cannot hear whispers)
- 40-69 dB HL means moderate (cannot hear conversational speech)
- 70-94 dB HL is severe (cannot hear shouting)
- more than 95 dB HL is profound (cannot hear sounds that would be painful for a hearing person)
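These bands are simple threshold lookups; here is a minimal Python sketch of the same classification. The cut-offs come from the list above, while the function name and the label for values below 25 dB HL are illustrative assumptions.

```python
def classify_hearing_loss(db_hl: float) -> str:
    """Map a measured hearing loss in dB HL to the bands listed above."""
    if db_hl < 25:
        return "below the mild band"   # not classified in the list above
    if db_hl < 40:
        return "mild"                  # 25-39 dB HL: cannot hear whispers
    if db_hl < 70:
        return "moderate"              # 40-69 dB HL: cannot hear conversational speech
    if db_hl < 95:
        return "severe"                # 70-94 dB HL: cannot hear shouting
    return "profound"                  # 95 dB HL and above ("more than 95" in the list)

print(classify_hearing_loss(55))  # -> "moderate"
```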
Types of hearing loss
Types of hearing loss include conductive hearing loss, sensorineural hearing loss and mixed type.
Conductive hearing loss
In this type, sound waves are unable to pass from the external ear into the inner ear, resulting in hearing loss. The most common causes are:
- blockage of the ear canal by ear wax
- perforation of the ear drum
- build-up of fluid due to an ear infection called glue ear
Sensorineural hearing loss
This occurs where the auditory nerve and other nerves that carry the information from the sounds heard to the brain are damaged due to age or injury.
Hearing loss due to aging is called presbyacusis. After the age of 30 to 40, many people start to lose their hearing in tiny amounts. This increases with age and, by age 80, many people may have significant hearing impairment.
Presbyacusis occurs when the sensitive hair cells inside the cochlea gradually become damaged or die. The initial symptoms include loss of high-frequency sounds, such as female or children’s voices, and difficulty in hearing consonants, making hearing and understanding speech difficult.
Ear injury is another common cause of hearing loss, typically due to damage from loud noises. Constant exposure to noise damages the inner structures of the ear and inflames the hair cells inside the cochlea.
Some drugs may also damage the nerves of the ear, leading to sensorineural hearing loss; notable examples include the aminoglycoside antibiotics (gentamicin, amikacin, etc.).
Mixed type of hearing loss
When people get both types together, the condition is termed mixed type of hearing loss.
Reviewed by April Cashin-Garbutt, BA Hons (Cantab) | http://www.news-medical.net/health/Types-of-hearing-loss.aspx | 13 |
45 | What is Inflation?
Inflation is the term used to describe a rise of average prices through the economy. It means that money is losing its value.
The underlying cause is usually that too much money is available to purchase too few goods and services, or that demand in the economy is outpacing supply. In general, this situation occurs when an economy is so buoyant that there are widespread shortages of labour and materials. People can charge higher prices for the same goods or services.
Inflation can also be caused by a rise in the prices of imported commodities, such as oil. However, this sort of inflation is usually transient, and less crucial than the structural inflation caused by an over-supply of money.
Inflation can be very damaging for a number of reasons. First, people may be left worse off if prices rise faster than their incomes. Second, inflation can reduce the real value of an investment if the returns prove insufficient to compensate investors for inflation. Third, since bouts of inflation often go hand in hand with an overheated economy, they can accentuate boom-bust cycles in the economy.
Sustained inflation also has longer-term effects. If money is losing its value, businesses and investors are less likely to make long-term contracts. This discourages long-term investment in the nation’s productive capacity.
The flip-side of inflation is deflation. This occurs when average prices are falling, and can also result in various economic effects. For example, people will put off spending if they expect prices to fall. Sustained deflation can cause a rapid economic slow-down.
The Reserve Bank is as concerned about deflation as it is about inflation. In New Zealand, however, it has historically been more usual for prices to rise. As Figure 1 shows, there have been only brief periods of deflation in the past 150-odd years, and these have been associated with economic depressions. The graph also shows that, once the economy had become established, New Zealand did not have sustained high inflation until the 1970s and 1980s.
In the late 1980s the government gave the Reserve Bank responsibility for keeping inflation low and more stable than it had been. Statutory authority was provided in the Reserve Bank of New Zealand Act 1989, and the specifics were set out in a written agreement between the Governor of the Reserve Bank and the Minister of Finance. This ‘Policy Targets Agreement’ initially called for a reduction of inflation to 0–2 percent increase in the Consumers Price Index (CPI) by 1992. It has been revised several times since, and the current agreement, signed in May 2007, calls for inflation to be kept within 1 to 3 percent a year, on average over the medium term. This means that, as the graph shows, inflation can exceed the 1–3 percent target range in the short term. However, in the medium term it remains within that band, on average, and this means that the very high inflation rates of the 1960s and 1970s – which at times exceeded 18 percent per annum – do not occur.
The effect of this arrangement is clear from Graph 2, in which inflation has remained within a narrow band. The Bank controls inflation through an economic tool known as the Official Cash Rate, covered in a separate sheet.
There are various ways of measuring inflation. The one used in the Policy Targets Agreement is the CPI published by Statistics New Zealand. This records the change in the price of a weighted ‘basket’ of goods and services purchased by an ‘average’ New Zealand household. Statistics New Zealand weights and indexes the various items in the basket and forms the ‘all-groups’ index. The percentage change of this index is typically referred to as ‘CPI inflation’, and is usually expressed over both a quarterly and annual period.
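As a worked illustration of "the percentage change of this index", here is a small Python sketch; the index values used are invented for illustration, and only the percentage-change formula reflects the text above.

```python
def cpi_inflation(index_now: float, index_then: float) -> float:
    """Percentage change of the all-groups CPI between two periods."""
    return (index_now / index_then - 1.0) * 100.0

# Hypothetical all-groups index values one year apart:
annual = cpi_inflation(1025.0, 1000.0)
print(f"annual CPI inflation: {annual:.1f}%")  # 2.5%, inside the 1-3 percent target band
```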
The contents of the basket are defined by Statistics New Zealand, which periodically reviews and re-weights them, using data obtained from their annual Household Economic Survey. This is necessary because the basket of goods and services purchased by the average household will change over time.
The Reserve Bank has published an interactive inflation calculator on its website, at: http://www.rbnz.govt.nz/statistics/0135595.html
This calculator allows users to input a sum of money and compare its value, in terms of the CPI and other measures for pre-CPI years, between any two quarters from 1862, to the latest quarter for which CPI figures are available. | http://www.rbnz.govt.nz/monpol/about/0053316.html | 13 |
49 | Gold in California
Gold became highly concentrated in California as the result of global forces operating over hundreds of millions of years. Volcanoes, tectonic plates and erosion all combined to concentrate billions of dollars worth of gold in the mountains of California. During the California Gold Rush, gold-seekers known as "Forty-Niners" retrieved this gold, at first using simple techniques, and then developing more sophisticated techniques, which spread around the world.
Scientists believe that over a span of at least 400 million years, gold that had been widely dispersed in the Earth’s crust became more concentrated by geologic actions into the gold-bearing regions of California. Only gold that is concentrated can be economically recovered. Some 400 million years ago, California lay at the bottom of a large sea; underwater volcanoes deposited lava and minerals (including gold) onto the sea floor; sometimes enough that islands were created. Between 400 million and 200 million years ago, geologic movement forced the sea floor and these volcanic islands and deposits eastwards, colliding with the North American continent, which was moving westwards.
Beginning about 200 million years ago, tectonic pressure forced the sea floor beneath the American continental mass. As it sank, or subducted, beneath today's California, the sea floor heated and melted into very large molten masses (magma). Being lighter and hotter than the ancient continental crust above it, this magma forced its way upward, cooling as it rose to become the granite rock found throughout the Sierra Nevada and other mountains in California today — such as the sheer rock walls and domes of Yosemite Valley. As the hot magma cooled, solidified, and came in contact with water, minerals with similar melting temperatures tended to concentrate themselves together. As it solidified, gold became concentrated within the magma, and during this cooling process, veins of gold formed within fields of quartz because of the similar melting temperatures of both.
As the Sierra Nevada and other mountains in California were forced upwards by the actions of tectonic plates, the solidified minerals and rocks were raised to the surface and exposed to rain, ice and snow. The surrounding rock then eroded and crumbled, and the exposed gold and other materials were carried downstream by water. Because gold is denser than almost all other minerals, this process further concentrated the gold as it sank, and pockets of gold gathered in quiet gravel beds along the sides of old rivers and streams.
The California mountains rose and shifted several times within the last fifty million years, and each time, old streambeds moved and were dried out, leaving the deposits of gold resting within the ancient gravel beds where the gold had been collecting. Newer rivers and streams then developed, and some of these cut through the old channels, carrying the gold into still larger concentrations.
The Forty-Niners of the California Gold Rush first focused their efforts on these deposits of gold, which had been gathered in the gravel beds by hundreds of millions of years of geologic action.
Gold recovery
The early Forty-Niners panned for gold in California’s rivers and streams, or used "cradles" and "rockers" or "long-toms," forms of placer mining. Modern estimates by the U.S. Geological Survey are that some 12 million ounces (373 t) of gold were removed in the first five years of the Gold Rush (worth approximately US$7.2 billion at November 2006 prices).
By 1853, the first hydraulic mining was used. In hydraulic mining, (which was invented in California) a powerful stream of water is directed at gold-bearing gravel beds; the gravel and gold then pass over sluices, with the gold settling to the bottom. By the mid-1880s, it is estimated that 11 million ounces (342 t) of gold (worth approximately US$6.6 billion at November 2006 prices) had been recovered via "hydraulicking."
The final stage to recover loose gold was to prospect for gold in the flat rivers of California’s Central Valley and other gold-bearing areas of California (such as Scott Valley in Siskiyou County). By the late 1890s, dredging technology (which was also invented in California) had become economical, and it is estimated that more than 20 million ounces (622 t) were recovered by dredging (worth approximately US$12 billion at November 2006 prices).
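The dollar values quoted above are essentially ounces multiplied by a gold price. As a quick check, the figures imply a price of roughly US$600 per troy ounce; that per-ounce price is inferred here, not stated in the text.

```python
IMPLIED_PRICE_USD_PER_OZ = 600  # roughly what the text's Nov 2006 valuations imply (assumption)

recovered = {
    "placer mining (first five years)": 12_000_000,  # troy ounces, from the text
    "hydraulicking": 11_000_000,
    "dredging": 20_000_000,
}
for method, ounces in recovered.items():
    print(f"{method}: ~US${ounces * IMPLIED_PRICE_USD_PER_OZ / 1e9:.1f} billion")
# ~7.2, ~6.6 and ~12.0 billion, matching the approximate values quoted above.
```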
Gold-seekers also engaged in "hard-rock" mining, that is, extracting the gold directly from the rock that contained it (typically quartz). Once the gold-bearing rocks were brought to the surface, the rocks were crushed, and the gold was separated out (using moving water), or leached out, typically by using arsenic or mercury. Eventually, hard-rock mining wound up being the single largest source of gold produced in the Gold Country.
Geological after-effects
There were decades of minor earthquakes, more than at any other time in the historical record for Northern California, before the 1906 San Francisco earthquake. Previously interpreted as precursory activity to the 1906 earthquake, they have since been found to have a strong seasonal pattern and to be due to large seasonal sediment loads in coastal bays overlying faults, a result of inland gold mining.
- Hill, Mary (1999). Gold: the California story. Berkeley and Los Angeles: University of California Press. p. 167.
- Hill, Mary (1999), p. 168.
- Hill, Mary (1999), pp. 168-69.
- Brands, H.W. (2003). The age of gold: the California Gold Rush and the new American dream. New York: Doubleday, pp. 195-196.
- Hill, Mary (1999), pp. 149-58. Similar forces created the granite domes and spires of Castle Crags in Shasta County.
- Hill, Mary (1999), pp. 174-78.
- Hill, Mary (1999), pp. 169-173.
- Hill, Mary (1999), pp. 94-100.
- Hill, Mary (1999), pp. 105-110.
- Images and detailed description of placer mining tools and techniques; image of a long tom
- Brands, H.W. (2002), pp. 198-200.
- Bancroft, Hubert Howe (1884-1890) History of California, vols. 18-24, The works of Hubert Howe Bancroft, complete text online, pp. 87-88.
- Mining History and Geology of the Mother Lode (accessed Oct. 16, 2006)
- Starr, Kevin (2005). California: a history. New York: The Modern Library, p. 89.
- Rawls, James J. and Orsi, Richard J. (eds.) (1999). A golden state: mining and economic development in Gold Rush California (California History Sesquicentennial Series, 2). Berkeley and Los Angeles: University of California Press.
- Rawls, James J. and Orsi, Richard (eds.) (1999), pp. 36-39
- Rawls, James J. and Orsi, Richard (eds.) (1999), pp. 39-43
- Seasonal Seismicity of Northern California Before the Great 1906 Earthquake, (Journal) Pure and Applied Geophysics, ISSN 0033-4553 (Print) 1420-9136 (Online), volume 159, Numbers 1-3 / January, 2002, Pages 7-62.
- Bancroft, Hubert Howe (1884–1890) History of California, vols. 18-24, The works of Hubert Howe Bancroft, complete text online
- Brands, H.W. (2003). The age of gold: the California Gold Rush and the new American dream. New York City: Doubleday. ISBN 0-385-72088-2.
- Hill, Mary (1999). Gold: the California story. Berkeley and Los Angeles: University of California Press. ISBN 0-520-21547-8.
- Rawls, James J. and Orsi, Richard J. (eds.) (1999). A golden state: mining and economic development in Gold Rush California (California History Sesquicentennial Series, 2). Berkeley and Los Angeles: University of California Press. ISBN 0-520-21771-3.
- Starr, Kevin (2005). California: a history. New York: The Modern Library. ISBN 0-679-64240-4. | http://en.wikipedia.org/wiki/Gold_in_California | 13 |
25 | Basic Accounting Terms
Here are some basic accounting terms to become familiar with. (Don’t get overwhelmed!) You don’t have to memorize these, but you need to be able to interpret them. Here are the most frequent accounting terms used.
Accounting - process of identifying, measuring, and reporting financial information of an entity.
Accounting Equation - Assets = Liabilities + Equity
Accounts Payable - money owed to creditors, vendors, etc.
Accounts Receivable - money owed to a business, i.e. credit sales.
Accrual Accounting - a method in which income is recorded when it is earned and expenses are recorded when they are incurred.
Asset - property with a cash value that is owned by a business or individual.
Balance Sheet - summary of a company's financial status, including assets, liabilities, and equity.
Bookkeeping - recording financial information.
Break-even – the amount of product that needs to be sold to create a profit of zero.
Cash-Basis Accounting - a method in which income and expenses are recorded when they are paid.
Chart of Accounts - a listing of a company's accounts and their corresponding numbers.
Cost Accounting - a type of accounting that focuses on recording, defining, and reporting costs associated with specific operating functions.
Credit - an account entry with a negative value for assets, and positive value for liabilities and equity.
Debit - an account entry with a positive value for assets, and negative value for liabilities and equity.
Depreciation - recognizing the decrease in the value of an asset due to age and use.
Double-Entry Bookkeeping - system of accounting in which every transaction has a corresponding positive and negative entry (debits and credits); a small code sketch after this glossary illustrates the idea.
Equity - money owed to the owner or owners of a company, also known as "owner's equity".
Financial Accounting - accounting focused on reporting an entity's activities to an external party; ie: shareholders.
Financial Statement - a record containing the balance sheet and the income statement.
Fixed Asset - long-term tangible property; building, land, computers, etc.
General Ledger - a record of all financial transactions within an entity.
Income Statement - a summary of income and expenses.
Job Costing - system of tracking costs associated with a job or project (labor, equipment, etc) and comparing with forecasted costs.
Journal - a record where transactions are recorded, also known as an "account"
Liability - money owed to creditors, vendors, etc.
Liquid Asset - cash or other property that can be easily converted to cash.
Loan - money borrowed from a lender and usually repaid with interest.
Net Income - money remaining after all expenses and taxes have been paid.
Non-operating Income - income generated from non-recurring transactions; ie: sale of an old building or piece of equipment.
Note - a written agreement to repay borrowed money; sometimes used in place of "loan"
Operating Income - income generated from regular business operations.
Payroll - a list of employees and their wages.
Profit - see "net income"
Profit/Loss Statement - see "income statement"
Revenue - total income before expenses.
Single-Entry Bookkeeping - system of accounting in which transactions are entered into one account. | http://chic-ceo.com/basic-accounting-terms | 13 |
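To make the accounting equation and the double-entry idea from the glossary above concrete, here is a minimal, illustrative Python sketch; the account names, amounts and helper function are invented for illustration and are not part of the source glossary.

```python
# Double-entry bookkeeping: every transaction is recorded twice,
# once as a debit and once as a matching credit.
ledger = []

def post(debit_account: str, credit_account: str, amount: float) -> None:
    """Record one transaction as a debit/credit pair of equal size."""
    ledger.append((debit_account, amount))    # debit side
    ledger.append((credit_account, -amount))  # credit side

post("Cash", "Owner's Equity", 10_000)  # owner invests cash in the business
post("Equipment", "Cash", 4_000)        # buy a fixed asset with cash
post("Cash", "Loan Payable", 2_500)     # borrow from a lender (a liability)

# Because debits always equal credits, the ledger sums to zero, which is one way
# of seeing the accounting equation Assets = Liabilities + Equity stay in balance.
assert abs(sum(amount for _, amount in ledger)) < 1e-9
```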
18 | England during the Price Revolution: cause & effect
The Golden Age of the English Peasant came to an end in the sixteenth century. The population rose sharply and so did prices, with prices something like five and a half times higher by the end of the Tudor century than the start. In many ways it could be argued that economic life in England during the Tudor period was more expensive and healthier than at any time since Roman times. The revitalization of a pre-industrial economy is essentially a matter of recovering population, and this is something that England had failed to do on a large scale ever since the Black Death. After 1525 the population finally started to rise sharply - it was a mere 2.26 million in 1525, but it was 4.10 million by 1601¹. This steep rise in population was the result of a complex and in part unknowable process, of which three factors may be highlighted -
- Decline in disease. Population had stagnated in the fifteenth century largely due to disease in both town and countryside. England was generally healthier during the sixteenth century. By the reign of Elizabeth I the annual death rate was never more than 2.68% of the population.
- Higher fertility rates. During the fifteenth century many people were dying unmarried or without male heirs. Because of the increased prosperity this afforded the peasantry (supply and demand applies to labour as well: when there are fewer workers they can demand better terms) fertility rates may well have risen.
- Earlier marriage. Another result of people being more prosperous was that they could marry earlier. This meant they were much more likely to have children, or more children. Demographers have also calculated that the chance of survival at birth was getting higher (again, perhaps down to the increased prosperity).
You might well ask what any of this has to do with a "price revolution". Well, when aggregate demand increases sharply across an economy and output remains fairly static, demand-pull inflation² occurs. Grain prices increased between five and six times over the Tudor century. The cost of living rose and people found themselves with an increasingly low standard of living. Wage rates went down as the available labour pool steadily rose and inevitably there were more people looking for work on the land than there were jobs for them to take. In part this fuelled the growth of the cities, where people would go to seek work, and the growth of "cottage industry" (often outside the traditional control of the guilds). Yet one of the remarkable things about Tudor England was its ability to feed itself amidst the general decline in standards of living. There was never an instance of mass mortality and Malthusian checks failed to kick in to keep the population low. Commercial farming by and large rose to meet the challenge.
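The headline figures above imply fairly modest compound annual rates; here is a short sketch of the arithmetic. The 76-year span comes from the 1525 and 1601 dates in the text, while treating the Tudor period as roughly 1485-1603 for the price series is an assumption.

```python
def compound_annual_rate(start: float, end: float, years: float) -> float:
    """Average annual growth rate implied by start and end values over a span of years."""
    return (end / start) ** (1.0 / years) - 1.0

population = compound_annual_rate(2.26, 4.10, 1601 - 1525)  # millions, from the text
prices = compound_annual_rate(1.0, 5.5, 1603 - 1485)        # prices ~5.5x over the Tudor period

print(f"population growth: ~{population:.2%} per year")  # roughly 0.8% per year
print(f"price inflation:   ~{prices:.2%} per year")      # roughly 1.5% per year
```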
Other factors than population growth contributed to the price rise, but the extent of their effect is much debated. The debasement of the coinage to pay for Continental war, especially that carried out by Henry VIII and Protector Somerset, decreased the value of coin in circulation (Gresham's Law, bad coin drives out the good: people hoard the valuable coin and only use the least valuable)³. Inflation was certainly Europe-wide (which is what made people eventually realise something so massive in scale could not be caused by trivial things like enclosure), and this has been put down to bullion flowing in from the New World and increased output from the silver mines of Bohemia.
The enclosure movement was much resented by contemporaries, and many blamed it for the general decline in their standard of living. In fact, the term "enclosure" is a bit of a blanket description for a few distinct practices, some of which were beneficial to the rural economy, and some beneficial to the national economy. Sometimes a small freeholder, often of villein stock, would buy up strips of land adjacent to his own and put a hedge around them, thus cutting them off from the open land, in a process called engrossing. This was not a new practice and it could increase the productivity of this land considerably. But the gentry began to engage in a practice that was more odious for their locale, but one which they were driven to by the price rise (their costs were rising but rents were static, so they had to do something to avoid going bankrupt). A gentleman might buy, as an outside speculator, large estates or stretches of open pasture, evict the tenants, and engross them. This depopulated the area and increased local unemployment, although the national economy benefitted from this consolidation. Finally, in very specific areas, a gentleman might enclose the common land, thus depriving everyone else of its benefit. This didn't happen very often, but it was one of the specific agrarian grievances of Ket's rebellion.
Less enterprising gentlemen who did not wish to engage in the vicissitudes of commercial farming could try that trusty old expedient of rack-renting (raising rents or increasing the fine charged when one tenant succeeded the next). His ability to do this varied greatly on a case by case basis because there were many different types of agreements between landlord and tenant. A tenant-at-will was most vulnerable, because he essentially had no legal rights, and only held his land so long as his Lord willed it so. A "customary tenant" had specific rights and obligations as laid out in the manorial court roll, whereas a copyholder had a copy of his rights and obligations which he could produce in the King's courts. He was the most secure, but unless he possessed an 'estate of inheritance' then the Lord could impose an arbitrary fine on his heir when he wished to succeed to the land. This made it easy for a Lord to force the tenant out and switch to commercial farming practices if he wished, or force the tenant out and sell to an outside speculator.
The fluidity of the land market produced what amounted to a revolution in the agrarian life of England. There was great wealth in some areas and great poverty in others. Overall the national life prospered and the nation became wealthier. The revolution was certainly needed to lay down the path for easier times to come, and the stimulation provided by the high inflation - it stimulated because it created hardship - led to vital structural change and the final collapse of the feudal order in the South (there were instances in the conservative North of peers going bankrupt rather than give up the established order!). As in all other spheres of national life, the Tudor century contained much of the turmoil needed to consolidate things for the stability ahead.
1. The source for this is the Wrigley-Schofield Index from The Population History of England, 1541 - 1871: A reconstruction
2. It's a pretty primitive form of demand-pull inflation. In an industrial economy this sort of inflation is accentuated greatly because as prices rise, costs rise, which forces prices to rise, and so on... if you don't understand this, don't worry too much. In an agrarian economy, greater aggregate demand for a static amount of grain is clearly going to push grain prices up as it becomes scarcer.
3. Cardinal Wolsey, who started this, was in a way just following in the rest of Europe's footsteps. English coin contained much more silver than Continental coin at the start of the sixteenth century, the result being an inequity when trading it (Continental coin was worth less).
Elton, G. R. England Under the Tudors 2nd. ed.: Methuen & Co, 1974.
Guy, John. Tudor England: Oxford University Press, 1988.
Helm, P. J. England under the Tudors and Yorkists: 1471-1603: Bell & Hyman, 1968.
Lotherington, John. The Tudor Years: Hodder & Stoughton, 1994. | http://everything2.com/user/Noung/writeups/The+Price+Revolution+in+Tudor+England | 13 |
15 | In meteorology, precipitation (also known as one of the classes of hydrometeors, which are atmospheric water phenomena) is any product of the condensation of atmospheric water vapour that falls under gravity. The main forms of precipitation include drizzle, rain, sleet, snow, graupel and hail. Precipitation occurs when a local portion of the atmosphere becomes saturated with water vapour, so that the water condenses and "precipitates". Thus, fog and mist are not precipitation but suspensions, because the water vapour does not condense sufficiently to precipitate. Two processes, possibly acting together, can lead to air becoming saturated: cooling the air or adding water vapour to the air. Generally, precipitation will fall to the surface; an exception is Virga which evaporates before reaching the surface. Precipitation forms as smaller droplets coalesce via collision with other rain drops or ice crystals within a cloud. Rain drops range in size from oblate, pancake-like shapes for larger drops, to small spheres for smaller drops. Unlike raindrops, snowflakes grow in a variety of different shapes and patterns, determined by the temperature and humidity characteristics of the air the snowflake moves through on its way to the ground. While snow and ice pellets require temperatures close to the ground to be near or below freezing, hail can occur during much warmer temperature regimes due to the process of its formation.
Moisture overriding associated with weather fronts is an overall major method of precipitation production. If enough moisture and upward motion is present, precipitation falls from convective clouds such as cumulonimbus and can organize into narrow rainbands. Where relatively warm water bodies are present, for example due to water evaporation from lakes, lake-effect snowfall becomes a concern downwind of the warm lakes within the cold cyclonic flow around the backside of extratropical cyclones. Lake-effect snowfall can be locally heavy. Thundersnow is possible within a cyclone's comma head and within lake effect precipitation bands. In mountainous areas, heavy precipitation is possible where upslope flow is maximized within windward sides of the terrain at elevation. On the leeward side of mountains, desert climates can exist due to the dry air caused by compressional heating. The movement of the monsoon trough, or intertropical convergence zone, brings rainy seasons to savannah climes.
Precipitation is a major component of the water cycle, and is responsible for depositing the fresh water on the planet. Approximately 505,000 cubic kilometres (121,000 cu mi) of water falls as precipitation each year; 398,000 cubic kilometres (95,000 cu mi) of it over the oceans and 107,000 cubic kilometres (26,000 cu mi) over land. Given the Earth's surface area, that means the globally averaged annual precipitation is 990 millimetres (39 in), but over land it is only 715 millimetres (28.1 in). Climate classification systems such as the Köppen climate classification system use average annual rainfall to help differentiate between differing climate regimes.
Any phenomenon which was at some point produced due to condensation or precipitation of moisture within the Earth's atmosphere is known as a hydrometeor. Particles composed of fallen precipitation which fell onto the Earth's surface can become hydrometeors if blown off the landscape by wind. Formations due to condensation such as clouds, haze, fog, and mist are composed of hydrometeors. All precipitation types are hydrometeors by definition, including virga, which is precipitation which evaporates before reaching the ground. Particles removed from the Earth's surface by wind such as blowing snow and blowing sea spray are also hydrometeors.
Precipitation is a major component of the water cycle, and is responsible for depositing most of the fresh water on the planet. Approximately 505,000 km3 (121,000 mi3) of water falls as precipitation each year, 398,000 km3 (95,000 cu mi) of it over the oceans. Given the Earth's surface area, that means the globally averaged annual precipitation is 990 millimetres (39 in).
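The 990 mm figure follows from dividing the annual precipitation volume by the Earth's surface area; here is a quick check. The roughly 510 million km² surface area is a standard value assumed here, not given in the text.

```python
annual_precipitation_km3 = 505_000     # from the text
earth_surface_area_km2 = 510_000_000   # approximate total surface area of the Earth (assumption)

mean_depth_mm = annual_precipitation_km3 / earth_surface_area_km2 * 1_000_000  # km -> mm
print(f"{mean_depth_mm:.0f} mm")  # ~990 mm, matching the globally averaged figure above
```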
Mechanisms of producing precipitation include convective, stratiform, and orographic rainfall. Convective processes involve strong vertical motions that can cause the overturning of the atmosphere in that location within an hour and cause heavy precipitation, while stratiform processes involve weaker upward motions and less intense precipitation. Precipitation can be divided into three categories, based on whether it falls as liquid water, liquid water that freezes on contact with the surface, or ice. Mixtures of different types of precipitation, including types in different categories, can fall simultaneously. Liquid forms of precipitation include rain and drizzle. Rain or drizzle that freezes on contact within a subfreezing air mass is called "freezing rain" or "freezing drizzle". Frozen forms of precipitation include snow, ice needles, ice pellets, hail, and graupel.
How the air becomes saturated
Cooling air to its dew point
The dew point is the temperature to which a parcel must be cooled in order to become saturated, and (unless super-saturation occurs) condenses to water. Water vapour normally begins to condense on condensation nuclei such as dust, ice, and salt in order to form clouds. An elevated portion of a frontal zone forces broad areas of lift, which form cloud decks such as altostratus or cirrostratus. Stratus is a stable cloud deck which tends to form when a cool, stable air mass is trapped underneath a warm air mass. It can also form due to the lifting of advection fog during breezy conditions.
There are four main mechanisms for cooling the air to its dew point: adiabatic cooling, conductive cooling, radiational cooling, and evaporative cooling. Adiabatic cooling occurs when air rises and expands. The air can rise due to convection, large-scale atmospheric motions, or a physical barrier such as a mountain (orographic lift). Conductive cooling occurs when the air comes into contact with a colder surface, usually by being blown from one surface to another, for example from a liquid water surface to colder land. Radiational cooling occurs due to the emission of infrared radiation, either by the air or by the surface underneath. Evaporative cooling occurs when moisture is added to the air through evaporation, which forces the air temperature to cool to its wet-bulb temperature, or until it reaches saturation.
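To put a number on "cooling the air to its dew point", here is a sketch using the Magnus approximation; the formula and its constants are a common textbook approximation and are not taken from the text above.

```python
import math

def dew_point_celsius(temp_c: float, rel_humidity_pct: float) -> float:
    """Approximate dew point via the Magnus formula (reasonable for everyday temperatures)."""
    a, b = 17.27, 237.7  # one common choice of Magnus constants
    gamma = math.log(rel_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# Air at 20 C and 60% relative humidity must cool to roughly 12 C before it
# becomes saturated and condensation can begin.
print(round(dew_point_celsius(20.0, 60.0), 1))
```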
Adding moisture to the air
The main ways water vapour is added to the air are: wind convergence into areas of upward motion, precipitation or virga falling from above, daytime heating evaporating water from the surface of oceans, water bodies or wet land, transpiration from plants, cool or dry air moving over warmer water, and lifting air over mountains.
Coalescence occurs when water droplets fuse to create larger water droplets, or when water droplets freeze onto an ice crystal, which is known as the Bergeron process. The fall rate of very small droplets is negligible, hence clouds do not fall out of the sky; precipitation will only occur when these coalesce into larger drops. When air turbulence occurs, water droplets collide, producing larger droplets. As these larger water droplets descend, coalescence continues, so that drops become heavy enough to overcome air resistance and fall as rain.
Raindrops have sizes ranging from 0.1 millimetres (0.0039 in) to 9 millimetres (0.35 in) mean diameter, above which they tend to break up. Smaller drops are called cloud droplets, and their shape is spherical. As a raindrop increases in size, its shape becomes more oblate, with its largest cross-section facing the oncoming airflow. Contrary to the cartoon pictures of raindrops, their shape does not resemble a teardrop. Intensity and duration of rainfall are usually inversely related, i.e., high intensity storms are likely to be of short duration and low intensity storms can have a long duration. Rain drops associated with melting hail tend to be larger than other rain drops. The METAR code for rain is RA, while the coding for rain showers is SHRA.
Ice pellets
Ice pellets or sleet are a form of precipitation consisting of small, translucent balls of ice. Ice pellets are usually (but not always) smaller than hailstones. They often bounce when they hit the ground, and generally do not freeze into a solid mass unless mixed with freezing rain. The METAR code for ice pellets is PL.
Ice pellets form when a layer of above-freezing air exists with sub-freezing air both above and below. This causes the partial or complete melting of any snowflakes falling through the warm layer. As they fall back into the sub-freezing layer closer to the surface, they re-freeze into ice pellets. However, if the sub-freezing layer beneath the warm layer is too small, the precipitation will not have time to re-freeze, and freezing rain will be the result at the surface. A temperature profile showing a warm layer above the ground is most likely to be found in advance of a warm front during the cold season, but can occasionally be found behind a passing cold front.
Like other precipitation, hail forms in storm clouds when supercooled water droplets freeze on contact with condensation nuclei, such as dust or dirt. The storm's updraft blows the hailstones to the upper part of the cloud. The updraft dissipates and the hailstones fall down, back into the updraft, and are lifted again. Hail has a diameter of 5 millimetres (0.20 in) or more. Within METAR code, GR is used to indicate larger hail, of a diameter of at least 6.4 millimetres (0.25 in). GR is derived from the French word grêle. Smaller-sized hail, as well as snow pellets, use the coding of GS, which is short for the French word grésil. Stones just larger than golf ball-sized are one of the most frequently reported hail sizes. Hailstones can grow to 15 centimetres (6 in) and weigh more than .5 kilograms (1.1 lb). In large hailstones, latent heat released by further freezing may melt the outer shell of the hailstone. The hailstone then may undergo 'wet growth', where the liquid outer shell collects other smaller hailstones. The hailstone gains an ice layer and grows increasingly larger with each ascent. Once a hailstone becomes too heavy to be supported by the storm's updraft, it falls from the cloud.
Snow crystals form when tiny supercooled cloud droplets (about 10 μm in diameter) freeze. Once a droplet has frozen, it grows in the supersaturated environment. Because water droplets are more numerous than the ice crystals, the crystals are able to grow to hundreds of micrometers or millimeters in size at the expense of the water droplets. This process is known as the Wegener-Bergeron-Findeisen process. The corresponding depletion of water vapor causes the droplets to evaporate, meaning that the ice crystals grow at the droplets' expense. These large crystals are an efficient source of precipitation, since they fall through the atmosphere due to their mass, and may collide and stick together in clusters, or aggregates. These aggregates are snowflakes, and are usually the type of ice particle that falls to the ground. Guinness World Records lists the world's largest snowflakes as those of January 1887 at Fort Keogh, Montana; allegedly one measured 38 cm (15 inches) wide. The exact details of the sticking mechanism remain a subject of research.
Although the ice is clear, scattering of light by the crystal facets and hollows/imperfections means that the crystals often appear white in color due to diffuse reflection of the whole spectrum of light by the small ice particles. The shape of the snowflake is determined broadly by the temperature and humidity at which it is formed. Rarely, at a temperature of around −2 °C (28 °F), snowflakes can form in threefold symmetry—triangular snowflakes. The most common snow particles are visibly irregular, although near-perfect snowflakes may be more common in pictures because they are more visually appealing. No two snowflakes are alike; each grows at a different rate and in a different pattern depending on the changing temperature and humidity within the atmosphere that it falls through on its way to the ground. The METAR code for snow is SN, while snow showers are coded SHSN.
Diamond dust
Diamond dust, also known as ice needles or ice crystals, forms at temperatures approaching −40 °F (−40 °C) due to air with slightly higher moisture from aloft mixing with colder, surface based air. They are made of simple ice crystals that are hexagonal in shape. The METAR identifier for diamond dust within international hourly weather reports is IC.
Frontal activity
Stratiform or dynamic precipitation occurs as a consequence of slow ascent of air in synoptic systems (on the order of cm/s), such as over surface cold fronts, and over and ahead of warm fronts. Similar ascent is seen around tropical cyclones outside of the eyewall, and in comma-head precipitation patterns around mid-latitude cyclones. A wide variety of weather can be found along an occluded front, with thunderstorms possible, but usually their passage is associated with a drying of the air mass. Occluded fronts usually form around mature low-pressure areas. Precipitation may occur on celestial bodies other than Earth. When it gets cold, Mars has precipitation that most likely takes the form of ice needles, rather than rain or snow.
Convective rain, or showery precipitation, occurs from convective clouds, e.g., cumulonimbus or cumulus congestus. It falls as showers with rapidly changing intensity. Convective precipitation falls over a certain area for a relatively short time, as convective clouds have limited horizontal extent. Most precipitation in the tropics appears to be convective; however, it has been suggested that stratiform precipitation also occurs. Graupel and hail indicate convection. In mid-latitudes, convective precipitation is intermittent and often associated with baroclinic boundaries such as cold fronts, squall lines, and warm fronts.
Orographic effects
Orographic precipitation occurs on the windward side of mountains and is caused by the rising air motion of a large-scale flow of moist air across the mountain ridge, resulting in adiabatic cooling and condensation. In mountainous parts of the world subjected to relatively consistent winds (for example, the trade winds), a more moist climate usually prevails on the windward side of a mountain than on the leeward or downwind side. Moisture is removed by orographic lift, leaving drier air (see katabatic wind) on the descending and generally warming, leeward side where a rain shadow is observed.
In Hawaii, Mount Waiʻaleʻale, on the island of Kauai, is notable for its extreme rainfall, as it has the second highest average annual rainfall on Earth, with 460 inches (12,000 mm). Storm systems affect the state with heavy rains between October and March. Local climates vary considerably on each island due to their topography, divisible into windward (Koʻolau) and leeward (Kona) regions based upon location relative to the higher mountains. Windward sides face the east to northeast trade winds and receive much more rainfall; leeward sides are drier and sunnier, with less rain and less cloud cover.
In South America, the Andes mountain range blocks Pacific moisture that arrives in that continent, resulting in a desertlike climate just downwind across western Argentina. The Sierra Nevada range creates the same effect in North America forming the Great Basin and Mojave Deserts.
Extratropical cyclones can bring cold and dangerous conditions with heavy rain and snow with winds exceeding 119 km/h (74 mph), (sometimes referred to as windstorms in Europe). The band of precipitation that is associated with their warm front is often extensive, forced by weak upward vertical motion of air over the frontal boundary which condenses as it cools and produces precipitation within an elongated band, which is wide and stratiform, meaning falling out of nimbostratus clouds. When moist air tries to dislodge an arctic air mass, overrunning snow can result within the poleward side of the elongated precipitation band. In the Northern Hemisphere, poleward is towards the North Pole, or north. Within the Southern Hemisphere, poleward is towards the South Pole, or south.
Southwest of extratropical cyclones, curved cyclonic flow bringing cold air across the relatively warm water bodies can lead to narrow lake-effect snow bands. Those bands bring strong localized snowfall which can be understood as follows: Large water bodies such as lakes efficiently store heat that results in significant temperature differences (larger than 13 °C or 23 °F) between the water surface and the air above. Because of this temperature difference, warmth and moisture are transported upward, condensing into vertically oriented clouds (see satellite picture) which produce snow showers. The temperature decrease with height and cloud depth are directly affected by both the water temperature and the large-scale environment. The stronger the temperature decrease with height, the deeper the clouds get, and the greater the precipitation rate becomes.
In mountainous areas, heavy snowfall accumulates when air is forced to ascend the mountains and squeeze out precipitation along their windward slopes, which in cold conditions, falls in the form of snow. Because of the ruggedness of terrain, forecasting the location of heavy snowfall remains a significant challenge.
Within the tropics
The wet, or rainy, season is the time of year, covering one or more months, when most of the average annual rainfall in a region falls. The term green season is also sometimes used as a euphemism by tourist authorities. Areas with wet seasons are dispersed across portions of the tropics and subtropics. Savanna climates and areas with monsoon regimes have wet summers and dry winters. Tropical rainforests technically do not have dry or wet seasons, since their rainfall is equally distributed through the year. Some areas with pronounced rainy seasons will see a break in rainfall mid-season when the intertropical convergence zone or monsoon trough move poleward of their location during the middle of the warm season. When the wet season occurs during the warm season, or summer, rain falls mainly during the late afternoon and early evening hours. The wet season is a time when air quality improves, freshwater quality improves, and vegetation grows significantly. Soil nutrients diminish and erosion increases. Animals have adaptation and survival strategies for the wetter regime. The previous dry season leads to food shortages into the wet season, as the crops have yet to mature. Developing countries have noted that their populations show seasonal weight fluctuations due to food shortages seen before the first harvest, which occurs late in the wet season.
Tropical cyclones, a source of very heavy rainfall, consist of large air masses several hundred miles across with low pressure at the centre and with winds blowing inward towards the centre in either a clockwise direction (southern hemisphere) or counterclockwise (northern hemisphere). Although cyclones can take an enormous toll in lives and personal property, they may be important factors in the precipitation regimes of places they impact, as they may bring much-needed precipitation to otherwise dry regions. Areas in their path can receive a year's worth of rainfall from a tropical cyclone passage.
Large-scale geographical distribution
On the large scale, the highest precipitation amounts outside topography fall in the tropics, closely tied to the Intertropical Convergence Zone, itself the ascending branch of the Hadley cell. Mountainous locales near the equator in Colombia are amongst the wettest places on Earth. North and south of this are regions of descending air that form subtropical ridges where precipitation is low; the land surface underneath is usually arid, which forms most of the Earth's deserts. An exception to this rule is in Hawaii, where upslope flow due to the trade winds lead to one of the wettest locations on Earth. Otherwise, the flow of the Westerlies into the Rocky Mountains lead to the wettest, and at elevation snowiest, locations within North America. In Asia during the wet season, the flow of moist air into the Himalayas leads to some of the greatest rainfall amounts measured on Earth in northeast India.
The standard way of measuring rainfall or snowfall is the standard rain gauge, which can be found in 100 mm (4 in) plastic and 200 mm (8 in) metal varieties. The inner cylinder is filled by 25 mm (1 in) of rain, with overflow flowing into the outer cylinder. Plastic gauges have markings on the inner cylinder down to 0.25 mm (0.01 in) resolution, while metal gauges require use of a stick designed with the appropriate 0.25 mm (0.01 in) markings. After the inner cylinder is filled, the amount inside it is recorded and discarded; the cylinder is then refilled with the remaining rainfall from the outer cylinder, and this is repeated, adding to the overall total, until the outer cylinder is empty. These gauges are used in the winter by removing the funnel and inner cylinder and allowing snow and freezing rain to collect inside the outer cylinder. Some add anti-freeze to their gauge so they do not have to melt the snow or ice that falls into the gauge. Once the snowfall/ice is finished accumulating, or as 300 mm (12 in) is approached, one can either bring it inside to melt, or fill the inner cylinder with lukewarm water to melt the frozen precipitation in the outer cylinder, keeping track of the warm fluid added, which is subsequently subtracted from the overall total once all the ice/snow is melted.
Other types of gauges include the popular wedge gauge (the cheapest rain gauge and most fragile), the tipping bucket rain gauge, and the weighing rain gauge. The wedge and tipping bucket gauges will have problems with snow. Attempts to compensate for snow/ice by warming the tipping bucket meet with limited success, since snow may sublimate if the gauge is kept much above freezing. Weighing gauges with antifreeze should do fine with snow, but again, the funnel needs to be removed before the event begins. For those looking to measure rainfall the most inexpensively, a can that is cylindrical with straight sides will act as a rain gauge if left out in the open, but its accuracy will depend on what ruler is used to measure the rain with. Any of the above rain gauges can be made at home, with enough know-how.
When a precipitation measurement is made, various networks exist across the United States and elsewhere where rainfall measurements can be submitted through the Internet, such as CoCoRAHS or GLOBE. If a network is not available in the area where one lives, the nearest local weather office will likely be interested in the measurement.
Return period
The likelihood or probability of an event with a specified intensity and duration is called the return period or frequency. The intensity of a storm can be predicted for any return period and storm duration, from charts based on historic data for the location. The term 1 in 10 year storm describes a rainfall event which is rare and is only likely to occur once every 10 years, so it has a 10 percent likelihood in any given year. The rainfall will be greater and the flooding will be worse than the worst storm expected in any single year. The term 1 in 100 year storm describes a rainfall event which is extremely rare and which will occur with a likelihood of only once in a century, so has a 1 percent likelihood in any given year. The rainfall will be extreme and the flooding will be worse than in a 1 in 10 year event. As with all probability events, it is possible to have multiple "1 in 100 Year Storms" in a single year.
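Since a return period is just an annual probability, the chance of seeing at least one such storm over a longer horizon follows from the complement rule. A small sketch follows; the horizons chosen are illustrative, and independent years are assumed.

```python
def prob_at_least_one(return_period_years: float, horizon_years: int) -> float:
    """Probability of at least one T-year event in the horizon, assuming independent years."""
    annual_p = 1.0 / return_period_years
    return 1.0 - (1.0 - annual_p) ** horizon_years

print(round(prob_at_least_one(100, 1), 3))   # 0.010 - the "1 percent in any given year" above
print(round(prob_at_least_one(100, 30), 3))  # ~0.260 - a 100-year storm is fairly likely over 30 years
print(round(prob_at_least_one(10, 10), 3))   # ~0.651 - and several 10-year storms per decade are possible
```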
Role in Köppen climate classification
The Köppen classification depends on average monthly values of temperature and precipitation. The most commonly used form of the Köppen classification has five primary types labeled A through E. Specifically, the primary types are A, tropical; B, dry; C, mild mid-latitude; D, cold mid-latitude; and E, polar. The five primary classifications can be further divided into secondary classifications such as rain forest, monsoon, tropical savanna, humid subtropical, humid continental, oceanic climate, Mediterranean climate, steppe, subarctic climate, tundra, polar ice cap, and desert.
Rain forests are characterized by high rainfall, with definitions setting minimum normal annual rainfall between 1,750 millimetres (69 in) and 2,000 millimetres (79 in). A tropical savanna is a grassland biome located in semi-arid to semi-humid climate regions of subtropical and tropical latitudes, with rainfall between 750 millimetres (30 in) and 1,270 millimetres (50 in) a year. Savannas are widespread in Africa and are also found in India, the northern parts of South America, Malaysia, and Australia. The humid subtropical climate zone is one where winter rainfall (and sometimes snowfall) is associated with large storms that the westerlies steer from west to east. Most summer rainfall occurs during thunderstorms and from occasional tropical cyclones. Humid subtropical climates lie on the east sides of continents, roughly between latitudes 20° and 40° away from the equator.
An oceanic (or maritime) climate is typically found along the west coasts of the world's continents at middle latitudes, bordering cool oceans, as well as in southeastern Australia, and is accompanied by plentiful precipitation year-round. The Mediterranean climate regime resembles the climate of the lands of the Mediterranean Basin and is also found in parts of western North America, parts of Western and South Australia, southwestern South Africa, and parts of central Chile. The climate is characterized by hot, dry summers and cool, wet winters. A steppe is a dry grassland. Subarctic climates are cold, with continuous permafrost and little precipitation.
Effect on agriculture
Precipitation, especially rain, has a dramatic effect on agriculture. All plants need at least some water to survive, so rain (being the most effective means of watering) is important to agriculture. While a regular rain pattern is usually vital to healthy plants, too much or too little rainfall can be harmful, even devastating, to crops. Drought can kill crops and increase erosion, while overly wet weather can cause harmful fungus growth. Plants need varying amounts of rainfall to survive; for example, certain cacti require only small amounts of water, while tropical plants may need hundreds of inches of rain per year.
In areas with wet and dry seasons, soil nutrients diminish and erosion increases during the wet season. Animals have adaptation and survival strategies for the wetter regime. The previous dry season leads to food shortages into the wet season, as the crops have yet to mature. Developing countries have noted that their populations show seasonal weight fluctuations due to food shortages seen before the first harvest, which occurs late in the wet season.
Changes due to global warming
Increasing temperatures tend to increase evaporation which leads to more precipitation. Precipitation has generally increased over land north of 30°N from 1900 through 2005 but has declined over the tropics since the 1970s. Globally there has been no statistically significant overall trend in precipitation over the past century, although trends have varied widely by region and over time. Eastern portions of North and South America, northern Europe, and northern and central Asia have become wetter. The Sahel, the Mediterranean, southern Africa and parts of southern Asia have become drier. There has been an increase in the number of heavy precipitation events over many areas during the past century, as well as an increase since the 1970s in the prevalence of droughts—especially in the tropics and subtropics. Changes in precipitation and evaporation over the oceans are suggested by the decreased salinity of mid- and high-latitude waters (implying more precipitation), along with increased salinity in lower latitudes (implying less precipitation, more evaporation, or both). Over the contiguous United States, total annual precipitation increased at an average rate of 6.1 percent per century since 1900, with the greatest increases within the East North Central climate region (11.6 percent per century) and the South (11.1 percent). Hawaii was the only region to show a decrease (-9.25 percent).
Changes due to urban heat island
The urban heat island warms cities 0.6 °C (1.1 °F) to 5.6 °C (10.1 °F) above surrounding suburbs and rural areas. This extra heat leads to greater upward motion, which can induce additional shower and thunderstorm activity. Rainfall rates downwind of cities are increased between 48% and 116%. Partly as a result of this warming, monthly rainfall is about 28% greater between 20 miles (32 km) to 40 miles (64 km) downwind of cities, compared with upwind. Some cities induce a total precipitation increase of 51%.
The Quantitative Precipitation Forecast (abbreviated QPF) is the expected amount of liquid precipitation accumulated over a specified time period over a specified area. A QPF is specified when a measurable precipitation type reaching a minimum threshold is forecast for any hour during a QPF valid period. Precipitation forecasts tend to be bound by synoptic hours such as 0000, 0600, 1200, and 1800 GMT. Terrain is considered in QPFs through the use of topography or of climatological precipitation patterns from finely detailed observations. Starting in the mid to late 1990s, QPFs were used within hydrologic forecast models to simulate impacts on rivers throughout the United States. Forecast models show significant sensitivity to humidity levels within the planetary boundary layer, the lowest levels of the atmosphere, and this sensitivity decreases with height. QPF can be generated on a quantitative basis (forecasting amounts) or a qualitative basis (forecasting the probability of a specific amount). Radar imagery forecasting techniques show higher skill than model forecasts within six to seven hours of the time of the radar image. The forecasts can be verified through use of rain gauge measurements, weather radar estimates, or a combination of both, and various skill scores can be computed to measure the value of the rainfall forecast.
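As a rough illustration of how a QPF can be verified against rain gauge observations, the sketch below computes two common categorical measures, bias and the critical success index (CSI), from made-up forecast/observation pairs; the 1 mm event threshold and the data are assumptions added for illustration, not values from the text.

```python
# Toy verification of a quantitative precipitation forecast (QPF) against
# rain gauge observations, using a simple rain / no-rain event threshold.
# Forecast and observed amounts (mm) are invented example values.

forecast = [0.0, 5.2, 12.0, 0.5, 8.0, 0.0]
observed = [0.2, 3.0,  9.5, 0.0, 0.0, 1.5]
THRESHOLD_MM = 1.0  # an "event" is precipitation of at least 1 mm

hits = misses = false_alarms = 0
for f, o in zip(forecast, observed):
    f_event, o_event = f >= THRESHOLD_MM, o >= THRESHOLD_MM
    if f_event and o_event:
        hits += 1
    elif o_event:
        misses += 1
    elif f_event:
        false_alarms += 1

bias = (hits + false_alarms) / (hits + misses)  # >1 means events are over-forecast
csi = hits / (hits + misses + false_alarms)     # 1.0 would be a perfect categorical forecast
print(f"bias = {bias:.2f}, CSI = {csi:.2f}")    # bias = 1.00, CSI = 0.50
```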
See also
- List of meteorology topics
- Basic precipitation
- Mango showers, pre-monsoon showers in the Indian states of Karnataka and Kerala that help in the ripening of mangoes.
- Sunshower, an unusual meteorological phenomenon in which rain falls while the sun is shining.
- Wintry showers, an informal meteorological term for various mixtures of rain, freezing rain, sleet and snow.
- Look up precipitation in Wiktionary, the free dictionary.
- World precipitation map
- Collision/Coalescence; The Bergeron Process
- Report local rainfall inside the United States at this site (CoCoRaHS)
- Report local rainfall related to tropical cyclones worldwide at this site
- Global Precipitation Climatology Center GPCC | http://en.wikipedia.org/wiki/Precipitation_(meteorology) | 13 |
24 | Mouse-over a link for a quick definition or click to read more in-depth!
- Agreed in December 1991 in the city of Maastricht in the Netherlands and signed there in February 1992, this treaty created the European Union and laid out the plans for the formation of a monetary union by 1999.
- It was understood that in order for the monetary union to be successful, its members needed to be part of an “optimal currency area”, and that stability among members was extremely important. In order to meet these requirements, the Maastricht Treaty set out convergence and stability criteria that had to be met before a country could become a member of the EMU; a rough numerical check of these thresholds is sketched in code after this list. The criteria were as follows (see here):
- Inflation was to be no more than 1.5 percentage points above the average of the 3 lowest inflation rates among EMU members.
- This was to ensure that monetary policies were similar across countries, as well as to gauge whether a country was susceptible to asymmetric shocks.
- Government deficits were limited to be no larger than 3 percent of GDP.
- This was to promote stability by overcoming Europe’s deficit bias.
- Government debt was limited to be no larger than 60 percent of GDP.
- This rule was not enforced, as most EMU members were unable to meet this criterion before 1999. As long as a potential member was reducing debt levels (through good management of deficit positions) they were allowed to enter the EMU.
- The potential member had to demonstrate exchange rate stability by being a member of the exchange rate mechanism (ERM) for at least 2 years prior to joining the EMU. In the ERM, a country’s central bank is required to keep exchange rate fluctuations within a specified range.
- This was again used to align monetary policy before joining the union, as well as to ensure a proper conversion rate once the local currency was exchanged for euros.
- The long-term interest rate was not to exceed the average of the lowest 3 rates among EMU members (or potential members) by more than 2 percentage points.
- This was to ensure that the fundamentals of the economy were similar across potential members.
- Each new member of the EU must meet these criteria before they can enter the EMU.
- Once a country becomes a member of the EMU, it is no longer bound by the Maastricht treaty’s convergence criteria (and, in the case of exchange rates, inflation, and the long-term interest rate, it no longer has independent control over these variables anyway).
- Once in the EMU, a country must abide by the Stability and Growth Pact.
- Inflation convergence: Inflation rates dramatically improved and converged in the run up to joining the EMU in 1999.
- Deficits: Every member (with the exception of Greece, which met the criterion by 2000) was able to bring its deficit-to-GDP ratio within 3 percent by 1999.
- Debt: Very few members met the 60 percent debt to GDP ratio, but authorities are pleased to see the general decline in the debt levels.
- Exchange rates: Each member was able to stay within the ERM for the required 2 years before joining the EMU.
- Long-run interest rates: Every member was successful in bringing long-term interest rates into line. | http://www.unc.edu/depts/europe/euroeconomics/Maastricht%20Treaty.php | 13 |
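As referenced in the criteria list above, here is a minimal sketch that checks a candidate country's figures against the numerical Maastricht thresholds; the country data are invented for illustration, and the reference values stand in for the averages of the best-performing member states.

```python
# Rough check of the numerical Maastricht convergence thresholds described above.
# All input figures are hypothetical illustrative values, not real country data.

candidate = {
    "inflation_pct": 2.1,     # annual inflation rate
    "deficit_pct_gdp": 2.8,   # government deficit as a share of GDP
    "debt_pct_gdp": 58.0,     # government debt as a share of GDP
    "long_rate_pct": 6.3,     # long-term interest rate
    "years_in_erm": 2.5,      # time spent inside the exchange rate mechanism
}

# Assumed reference values (averages of the best-performing member states).
ref_inflation_pct = 1.2
ref_long_rate_pct = 5.0

checks = {
    "inflation":     candidate["inflation_pct"] <= ref_inflation_pct + 1.5,
    "deficit":       candidate["deficit_pct_gdp"] <= 3.0,
    "debt":          candidate["debt_pct_gdp"] <= 60.0,
    "interest rate": candidate["long_rate_pct"] <= ref_long_rate_pct + 2.0,
    "ERM tenure":    candidate["years_in_erm"] >= 2.0,
}

for name, passed in checks.items():
    print(f"{name:13s}: {'meets criterion' if passed else 'fails criterion'}")
```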
35 | Agent Pincher: The Case of the UFO--Unfamiliar Foreign Objects. That is what currency from another country may look like. Sometimes when people first try to use money from another country, they feel like they are playing with toy money: it is a different size, color, and shape compared to one's own national currency, and it often comes with unfamiliar writing. As a special agent, your job is to get the facts on these UFOs and compile a profile for a guide book for your section.
- Describe the currency from other countries and explain how foreign currency functions in the same way as United States currency.
- Identify at least one foreign currency and the country that uses that currency, and be able to complete one calculation of the exchange rate between U.S. dollars and that currency.
- Identify economic characteristics (indicators) of other countries.
This lesson enables you to introduce the concepts of trade, foreign currency, exchange rates, imports and exports to your students. If you have not taught your students about the characteristics and functions of money, you may wish to explore one of the EconEdLink lessons listed in the Resources before proceeding with this lesson. The students will need a basic understanding of money’s function as a medium of exchange.
The students will research a specific country, gather data and share their findings with the class by creating a fact book.
Lessons to consider teaching before this lesson:
- Agent Pincher: The Case of the Missing Susan B. Anthony Dollar
- Agent Pincher: P is for Penny, or Where Did Money Come From?
- The Need for Money That Everyone Can Use
Materials and resources for this lesson:
Student Notebook: The students will need to fill this out to complete the lesson.
Teacher Briefing: This worksheet can be used to brief the class and prepare them for the lesson.
International Currency Factbook: This EconEdLink Worksheet allows students to compare international currency.
The UN Cyber School Bus: Here the students can explore information about various countries around the world, according to categories such as Economy, Health, Environment, etc.
CIA World Fact Book. This location will provide students with easy access to information that will help them complete their Agent Notebook on their selected country.
International Bank Note Society: Using this site, students can print out a picture of the front and back of their country's most recent currency.
Central Intelligence Agency: The students can explore the following site to find information about various countries and their import/export commodities.
- Import Commodities
Lost Memo: This memo will provide good practice for students in their attempts to understand exchange rates.
- Make sure the students are prepared to begin this lesson, and they have their Student Notebook. Three lessons are posted in the resource section as background material for this lesson. Also, the Teacher's Briefing can be used as a starting point.
- Explain that the students will be investigating something that is probably not familiar to them (unless they have had the opportunity for foreign travel) – foreign currency.
- Select approximately 10 countries more than the number of students in your class. You may wish to focus your selections on a region of the world that you wish to introduce, or one that is included in your curriculum. You may also let the students sign up for their own choices--but avoid duplication. You may choose to group students in pairs to accomplish the research.
- Print out the International Currency Fact Sheet. Post this in a convenient location so that students can enter data regarding the currency of the country of their research.
Have the students print out a copy of the Agent Notebook. The students will complete their notebooks by following the assignments listed below.
The students will visit the UN Cyber School Bus to view the country they have picked and to obtain a picture and some general information about that country.
Have the students use the resources listed above, such as the CIA World Fact Book, to complete assignment 2 in the Agent Notebook.
Have the students fill in the International Currency Factsheet with the information they have gained through research.
Have the students print out a picture of the front and back of their country's most recent currency. Pictures of currency can be viewed at the International Bank Note Society. Students will first need to select a language; then they should select "paper money virtual gallery." Once there, the students need to select "banknotes," which leads them to maps of different continents. Have the students find their country by clicking on the continent it belongs to.
Have the students complete assignment 4 in their Agent Notebook.
A calculator is recommended for assignment 4.
You may choose to use a world map and place push pins in the capital cities of the researched countries. When the students have identified exports and imports, you might use different-colored threads to link countries to their trading partners. This will illustrate the interdependence of nations in a global market.
At the conclusion of the activity, compile the students' completed Agent Notebooks to create a factbook for the countries researched.
Take a moment to review with your students the names of the different currencies their countries use. Also, take note of how their currencies stack up against the U.S. dollar. Using assignment 4 from their completed Agent Notebooks, you can show the class these exchange relations. Conclude the lesson by asking the class to discuss the following questions:
1. What is an export? Who receives the commodities?
2. Why do people trade?
3. What is the UFO? – Unidentified Foreign Objects? (Currency from other countries.)
4. Why doesn’t everybody in the world use U.S. dollars? Or euros?
5. Why do we no longer use pieces of gold for exchange?
As a final challenge, present this lost memo and see if any students can figure out the exchange rate, using the knowledge they have gathered from the lesson. This memo can be used from the Web site or it can be printed out and distributed among the students. The information found in the memo was taken from one of the Harry Potter books.
[It has been a mystery to figure out how much the Harry Potter currency is worth. If I got this memo, first I'd figure out: $250 million = 34 million Galleons, so what is 1 Galleon worth? Then I'd figure out the inflation step (since that was back in 1985): if something was worth $250 million in 1985, how much is that in 2005 dollars? From there (assuming the Galleon amount stays fixed at 34 million), what is a Galleon worth in today's dollars? And then the kids could figure out the number of Knuts and Sickles.]
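If it helps to lay out the arithmetic from the memo explicitly, the sketch below works out the dollars-per-Galleon rate from the figures given ($250 million for 34 million Galleons) and then applies an assumed 1985-to-2005 inflation factor; that factor and the Sickle/Knut conversion ratios are assumptions added for illustration, not values from the lesson.

```python
# Worked arithmetic for the "lost memo" exchange-rate challenge.
dollars_1985 = 250_000_000
galleons = 34_000_000

rate_1985 = dollars_1985 / galleons          # dollars per Galleon in 1985
print(f"1 Galleon is about ${rate_1985:.2f} in 1985 dollars")

# Assumed cumulative US inflation factor from 1985 to 2005 (roughly 1.8x; illustrative only).
inflation_factor = 1.8
rate_2005 = rate_1985 * inflation_factor
print(f"1 Galleon is about ${rate_2005:.2f} in 2005 dollars")

# Commonly cited wizarding conversions (assumed): 17 Sickles = 1 Galleon, 29 Knuts = 1 Sickle.
sickle = rate_2005 / 17
knut = sickle / 29
print(f"1 Sickle is about ${sickle:.2f}, and 1 Knut about ${knut:.3f}")
```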
- Ask the students to create a memo to the Big Bosses, providing a one-page briefing on the country they have researched.
- For an integrated assignment, have the students create drawings, or find pictures on the Web, of the top three items their country exports, and have them identify other students who represent countries with which they have trade relations. If students are able to handle the math, have them identify what the exchange rate would be between countries (not including the U.S. dollar).
When you have finished the main lesson, your students might have enough energy left to pursue more information and make some comparisons. You may wish to divide your students into groups and instruct them to return to the UN Cyber School Bus site.
Instruct the students to enter their countries' names again and then select go. Here they should be patient: there is a lot of information to bring together. As a group, they may select up to 6 comparison data categories. Don't forget to look for the small printer icon to print out the data. Once they have their data, the students can make inferences based on the information they have found. For example: They can find the population of China and Australia and also the surface area of these two countries. They will see that China has many more people living in a smaller area: therefore, China would be very crowded compared to Australia.
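For the China/Australia example mentioned above, a quick calculation like the one below shows the kind of inference intended; the population and area figures are rounded approximations and should be replaced with whatever values the students find on the site.

```python
# Rough population-density comparison; figures are rounded approximations.
countries = {
    "China":     {"population": 1_300_000_000, "area_km2": 9_600_000},
    "Australia": {"population":    20_000_000, "area_km2": 7_700_000},
}

for name, data in countries.items():
    density = data["population"] / data["area_km2"]
    print(f"{name}: about {density:.0f} people per square kilometre")
```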
Have the students write down their findings and, time permitting, hold a class discussion about their inferences. Identify factors you'd like them to use in making comparisons about each country: the economy and technology categories, for example, will provide several items worthy of class discussion. Suggest selected categories that would be common for each group to use as a basis of comparison: population, economy, health, technology, environment.
“I find your lesson very helpful and informative. It facilitates easy instructions. Thank you.” | http://www.econedlink.org/lessons/index.php?lid=605&type=educator | 13 |
14 | This topic covers infections of the middle ear, commonly called ear infections. For information on outer ear infections, see the topic Ear Canal Problems (Swimmer's Ear). For information on inner ear infections, see the topic Labyrinthitis.
The middle ear is the small part of your ear behind your eardrum. It can get infected when germs from the nose and throat are trapped there.
A small tube connects each of your ears to your throat. These tubes are called eustachian tubes (say "yoo-STAY-shee-un"). A cold can cause a tube to swell. When the tube swells enough to become blocked, it can trap fluid inside your ear. This makes it a perfect place for germs to grow and cause an infection.
Ear infections happen mostly to young children, because their tubes are smaller and get blocked more easily.
The main symptom is an earache. It can be mild, or it can hurt a lot. Babies and young children may be fussy. They may pull at their ears and cry. They may have trouble sleeping. They may also have a fever.
You may see thick, yellow fluid coming from their ears. This happens when the infection has caused the eardrum to burst and the fluid flows out. This is not serious and usually makes the pain go away. The eardrum usually heals on its own.
When fluid builds up but does not get infected, children often say that their ears just feel plugged. They may have trouble hearing, but their hearing usually returns to normal after the fluid is gone. It may take weeks for the fluid to drain away.
Your doctor will talk to you about your child's symptoms. Then he or she will look into your child's ears. A special tool with a light lets the doctor see the eardrum and tell whether there is fluid behind it. This exam is rarely uncomfortable. It bothers some children more than others.
Most ear infections go away on their own, although antibiotics are recommended for children under the age of 2 and for children at high risk for complications. You can treat your child at home with an over-the-counter pain reliever like acetaminophen (such as Tylenol), a warm washcloth or heating pad on the ear, and rest. Do not give aspirin to anyone younger than 20. Your doctor may give you eardrops that can help your child's pain.
Sometimes after an infection, a child cannot hear well for a while. Call your doctor if this lasts for 3 to 4 months. Children need to be able to hear in order to learn how to talk.
Your doctor can give your child antibiotics, but ear infections often get better without them. Talk about this with your doctor. Whether you use them will depend on how old your child is and how bad the infection is.
Minor surgery to put tubes in the ears may help if your child has hearing problems or repeat infections.
There are many ways to help prevent ear infections. Do not smoke. Ear infections happen more often to children who are around cigarette smoke. Even the fumes from tobacco smoke on your hair and clothes can affect them. Hand-washing and having your child immunized can help, too.
Also, make sure your child does not go to sleep while sucking on a bottle. And try to limit the use of group child care.
Learning about ear infections:
Helping a sick child:
Health Tools help you make wise health decisions or take action to improve your health.
|Decision Points focus on key medical care decisions that are important to many health problems.|
|Ear Infection: Should I Give My Child Antibiotics?|
|Ear Problems: Should My Child Be Treated for Fluid Buildup in the Middle Ear?|
Middle ear infections are caused by bacteria and viruses.
During a cold, sinus or throat infection, or an allergy attack, the eustachian tubes, which connect the middle ears to the throat, can become blocked. This stops fluid from draining from the middle ear. This fluid is a perfect breeding ground for bacteria or viruses to grow into an ear infection.
When swelling from an upper respiratory infection or allergy blocks the eustachian tube, air can't reach the middle ear. This creates a vacuum and suction, which pulls fluid and germs from the nose and throat into the middle ear. The swollen tube prevents this fluid from draining. An ear infection begins when bacteria or viruses in the trapped fluid grow into an infection.
Inflammation and fluid buildup can occur without infection and cause a feeling of stuffiness in the ears. This is known as otitis media with effusion.
Symptoms of a middle ear infection (acute otitis media) often start 2 to 7 days after the start of a cold or other upper respiratory infection. Symptoms of an ear infection may include:
Symptoms of fluid buildup may include:
Some children don't have any symptoms with this condition.
Middle ear infections usually occur along with an upper respiratory infection (URI), such as a cold. During a URI, the lining of the eustachian tube can swell and block the tube. Fluid builds up in the middle ear, creating a perfect breeding ground for bacteria or viruses to grow into an ear infection.
Pus develops as the body tries to fight the ear infection. More fluid collects and pushes against the eardrum, causing pain and sometimes problems hearing. Fever generally lasts a few days. And pain and crying usually last for several hours. After that, most children have some pain on and off for several days, although young children may have pain that comes and goes for more than a week. Antibiotic treatment may shorten some symptoms. But most of the time the immune system can fight infection and heal the ear infection without the use of these medicines. Children under 2 are treated with antibiotics, because they are more likely to have complications from the ear infection.
In severe cases, too much fluid can increase pressure on the eardrum until it ruptures, allowing the fluid to drain. When this happens, fever and pain usually go away and the infection clears. The eardrum usually heals on its own, often in just a couple of weeks.
Sometimes complications, such as a condition called chronic suppurative otitis media (an ear infection with chronic drainage), can arise from repeat ear infections.
Most children who have ear infections still have some fluid behind the eardrum a few weeks after the infection is gone. For some children, the fluid clears in about a month. And a few children still have fluid buildup (effusion) several months after an ear infection clears. This fluid buildup in the ear is called otitis media with effusion. Hearing problems can result, because the fluid affects how the middle ear works. Usually, infection does not occur.
Otitis media with fluid buildup (effusion) may occur even if a child has not had an obvious ear infection or upper respiratory infection. In these cases, something else has caused eustachian tube blockage.
In rare cases, complications can arise from middle ear infection or fluid buildup. Examples include hearing loss and ruptured eardrum.
Some factors that increase the risk for middle ear infection (acute otitis media) are out of your control. These include:
Other factors that increase the risk for ear infection include:
Factors that increase the risk for repeated ear infections also include:
Call your doctor immediately if:
Call your doctor if:
Watchful waiting is when you and your doctor watch symptoms to see if the health problem improves on its own. If it does, no treatment is necessary. If the symptoms don't get better or get worse, then it’s time to take the next treatment step.
Your doctor may recommend watchful waiting if your child is 2 years of age or older, has mild ear pain, and is otherwise healthy. Most ear infections get better without antibiotics. But if your child's pain doesn't get better with nonprescription children's pain reliever (such as acetaminophen) or the symptoms continue after 48 hours, call a doctor.
Health professionals who can diagnose and treat ear infections (acute otitis media) include:
Children who have ear infections often may need to see one of these specialists:
To prepare for your appointment, see the topic Making the Most of Your Appointment.
Middle ear infections are usually diagnosed using a health history, a physical exam, and an ear exam.
With a middle ear infection, the eardrum, when seen through a pneumatic otoscope, is red or yellow and bulging. In the case of fluid buildup without infection (otitis media with effusion), the eardrum can look like it's bulging or sucking in. In both cases, the eardrum doesn't move freely when the pneumatic otoscope pushes air into the ear.
Other tests can include:
Treatment for middle ear infections (acute otitis media) involves home treatment for symptom relief.
Your doctor can give your child antibiotics, but ear infections often get better without them. Talk about this with your doctor. Whether you use antibiotics will depend on how old your child is and how bad the infection is.
Follow-up exams with a doctor are important to check for persistent infection, fluid behind the eardrum (otitis media with effusion), or repeat infections.
The first treatment of a middle ear infection focuses on relieving pain. The doctor will also assess your child for any risk of complications.
If your child has an ear infection and appears very ill, is younger than 2, or is at risk for complications from the infection, your doctor will likely give antibiotics right away.
If your child has cochlear implants, your doctor will probably prescribe antibiotics, because bacterial meningitis is more common in children who have cochlear implants than in children who do not have cochlear implants.
For children ages 2 and older, more options are available. Some doctors prescribe antibiotics for all ear infections, because it's hard to tell which ear infections will clear up on their own. Other doctors ask parents to watch their child's symptoms for a couple of days, since most ear infections get better without treatment. Antibiotic treatment has only minimal benefits in reducing pain and fever. The cost of medicine and possible side effects are factors doctors consider before giving antibiotics. Also, many doctors are concerned about the growing number of bacteria that are becoming resistant to antibiotics because of frequent use of antibiotics.
If your child's condition improves in the first couple of days, treating the symptoms at home may be all that is needed. Some steps you can take at home to treat ear infection include:
If your child isn't better after a couple of days of home treatment, call your doctor. He or she may prescribe antibiotics.
Decongestants, antihistamines, and other over-the-counter cold remedies do not often work for treating or preventing ear infection. Antihistamines that cause sleepiness may thicken fluids, which can make your child feel worse. Check with the doctor before giving these medicines to your child. Experts say not to give decongestants to children younger than 2.
If your child with an ear infection must take an airplane trip, talk with your doctor about how to cope with ear pain during the trip.
Fluid behind the eardrum after an ear infection is normal. And in most children, the fluid clears up within 3 months without treatment. Test your child's hearing if the fluid persists past that point. If hearing is normal, you may choose to continue monitoring your child without treatment.
If a child has repeat ear infections (three or more ear infections in a 6-month period or four in 1 year), you may want to consider treatment to prevent future infections.
One option used a lot in the past is long-term oral antibiotic treatment. There is debate within the medical community about using antibiotics on a long-term basis to prevent ear infections. Many doctors don't want to prescribe long-term antibiotics, because they are not sure that they really work. Also, when antibiotics are used too often, bacteria can become resistant to antibiotics. Having tubes put in the ears is another option for treating repeat ear infections.
If your child has fluid buildup without infection, you may try watchful waiting. Fluid behind the eardrum after an ear infection is normal. In most children, the fluid clears up within a few months without treatment. Have your child's hearing tested if the fluid persists past 3 months. If hearing is normal, you may choose to keep watching your child without treatment.
If a child has fluid behind the eardrum for more than 3 months and has significant hearing problems, treatment is needed. Sometimes short-term hearing loss occurs, which is especially a concern in children ages 2 and younger. Normal hearing is very important when young children are learning to talk.
Doctors may consider surgery for children with repeat ear infections or those with persistent fluid behind the eardrum. Procedures include inserting ear tubes or removing adenoids and, in rare cases, the tonsils.
Inserting tubes into the eardrum (myringotomy or tympanostomy with tube placement) allows fluid to drain from the middle ear. The tubes keep fluid from building up and may prevent repeat ear infections. These tubes stay in place for 6 to 12 months and then fall out on their own. If needed, tubes are inserted again if more fluid builds up. About 8 out of 10 children need no further treatment after tubes are inserted for otitis media with effusion.3
You can use antibiotic eardrops for ear infections while tubes are in place. In some cases, antibiotic eardrops seem to work better than antibiotics by mouth when tubes are present.4
While tubes are in place, your doctor will recommend ear protection, including caution with water. The ear could get infected if any germs in the water get into the ear.
As a treatment for chronic ear infections, experts recommend removing adenoids and tonsils only after tubes and antibiotics have failed. Removing adenoids may improve air and fluid flow in nasal passages. This may reduce the chance of fluid collecting in the middle ear, which can lead to infection. Tonsils are removed if they are frequently infected. Experts do not recommend tonsil removal alone as a treatment for ear infections.5
If your child has a ruptured eardrum, keep water from getting in the ear when your child takes a bath or a shower or goes swimming. The ear could get infected if any germs in the water get into the ear. If your doctor says it’s okay, your child may use earplugs. Or your doctor may have other advice for you. He or she can tell you when the hole in the eardrum has healed and when it’s okay to go back to regular water activities.
If a ruptured eardrum hasn't healed in 3 to 6 months, your child may need surgery (myringoplasty or tympanoplasty) to close the hole. This surgery is rarely done, because the eardrum usually heals on its own within a few weeks. If a child has had many ear infections, you may delay surgery until the child is 6 to 8 years old to allow time for eustachian tube function to improve. At this point, there is a better chance that surgery will work.
If amoxicillin—the most commonly used antibiotic for ear infections—does not improve symptoms in 48 hours, your doctor may try a different antibiotic.
When taking antibiotics for ear infection, it is very important that your child take all of the medicine as directed, even if he or she feels better. Do not use leftover antibiotics to treat another illness. Misuse of antibiotics can lead to drug-resistant bacteria.
Most studies find that decongestants, antihistamines, and other nonprescription cold remedies usually do not help prevent or treat ear infections or fluid behind the eardrum.
Children who have fluid behind the eardrum longer than 3 months (chronic otitis media with effusion) may have trouble hearing and need a hearing test. If there is a hearing problem, your doctor may also prescribe antibiotics to help clear the fluid. But that usually doesn't help. The doctor might also suggest placing tubes in the ears to drain the fluid and improve hearing.
If your child is younger than 2, your doctor may not wait 3 months to start treatment because hearing problems at this age could affect your child's speaking ability. This is also why children in this age group are closely watched when they have ear infections.
Children who get rare but serious problems from ear infections, such as infection in the tissues around the brain and spinal cord (meningitis) or infection in the bone behind the ear (mastoiditis), need treatment right away.
You may be able to prevent your child from getting middle ear infections by:
Rest and care at home is often all children 2 years of age or older with ear infections need. Most ear infections get better without treatment. If your child is mildly ill and home treatment takes care of the earache, you may choose not to seek treatment for the ear infection.
At home, try:
Decongestants, antihistamines, expectorants, and other over-the-counter cold remedies usually do not work for treating or preventing ear infections. Antihistamines that cause sleepiness may thicken fluids, which can make your child feel worse. Check with the doctor before giving these medicines to your child. Experts say not to give decongestants to children younger than age 2.
If your child with an ear infection must take an airplane trip, talk with your doctor about how to help your child cope with ear pain during the trip.
If your child isn't better after a few days of home treatment, call your doctor.
If your child has a ruptured eardrum or has ear tubes in place, keep water from getting in the ear when your child takes a bath or a shower or goes swimming. The ear could get infected if any germs in the water get into the ear. If your doctor says it’s okay, your child may use earplugs. Or your doctor may have other advice for you. He or she can tell you when the hole in the eardrum has healed and when it’s okay to go back to regular water activities.
Antibiotics can treat ear infections. But most children with ear infections get better without them. If the care you give at home relieves pain, and a child's symptoms are getting better after a few days, you may not need antibiotics.
If your child has an ear infection and appears very ill, is younger than 2, or is at risk for complications from the infection, your doctor will likely give antibiotics right away. For children ages 2 and older, many doctors wait for a few days to see if the ear infection will get better on its own. When doctors do prescribe antibiotics, they most often use amoxicillin because it works well and costs less than other brands.
Experts suggest a hearing test if a child has had fluid behind his or her eardrum longer than 3 months. Normal hearing is critical during the first 2 years when your child is learning to talk. Your doctor may prescribe antibiotics to help clear the fluid. But that usually doesn't help. The doctor may also suggest placing tubes in the ears to drain fluid and improve hearing.
Other medicines that can treat symptoms of ear infection include:
Decongestants, antihistamines, expectorants, and other over-the-counter cold remedies usually do not work well for treating or preventing ear infections. Antihistamines that may make your child sleepy can thicken fluids and may actually make your child feel worse. Check with the doctor before giving these medicines to your child. Experts say not to give decongestants to children younger than 2.
Antibiotics may help cure ear infections caused by bacteria.
Some doctors prefer to treat all ear infections with antibiotics. Some things to consider before your child takes antibiotics include:
If your child still has symptoms (fever and earache) longer than 48 hours after starting an antibiotic, a different antibiotic may work better. Call your doctor if your child isn't feeling better after 2 days of antibiotic treatment.
Surgery for middle ear infections (acute otitis media) often means placing a drainage tube into the eardrum of one or both ears. It's one of the most common childhood operations. While the child is under general anesthesia, the surgeon cuts a small hole in the eardrum and inserts a small plastic tube in the opening (myringotomy or tympanostomy with tube placement).
The tubes will ventilate the middle ear after the fluid is gone. And they help relieve hearing problems.
Doctors consider tube placement for children who have had repeat infections or fluid behind the eardrum in both ears for 3 to 4 months and have trouble hearing. Sometimes they consider tubes for a child who has fluid in only one ear but also has trouble hearing.
Inserting ear tubes (myringotomy or tympanostomy with tube placement) often restores hearing and helps prevent buildup of pressure and fluid in the middle ear.
Adenoid removal (adenoidectomy) or adenoid and tonsil removal (adenotonsillectomy) may help some children who have repeat ear infections or fluid behind the eardrum. Children younger than 4 don't usually have their adenoids taken out unless they have severe nasal blockage. Taking out the tonsils alone is not usually done unless a child has another reason to have them removed.
Most tubes stay in place for about 6 to 12 months, after which they usually fall out on their own. After the tubes are out, the hole in the eardrum usually closes in 3 to 4 weeks. Some children need tubes put back in their ears because fluid behind the eardrum returns.
In rare cases, tubes may scar the eardrum and lead to permanent hearing loss.
Doctors suggest tubes if fluid behind the eardrum or ear infections keep coming back. Learn the pros and cons of this surgery. Before deciding, ask how the surgery can help or hurt your child and how much it will cost.
Surgeons sometimes operate to close a ruptured eardrum that hasn't healed in 3 to 6 months, though this is rare. The eardrum usually heals on its own within a few weeks.
If your child has a ruptured eardrum or has ear tubes in place, your doctor will recommend ear protection, including caution with water. The ear could get infected if any germs in the water get into the ear. If your doctor says it’s okay, your child may use earplugs. Or your doctor may have other advice for you. He or she can tell you when the hole in the eardrum has healed and when it’s okay to go back to regular water activities.
Allergy treatment can help children who have allergies and who also have frequent ear infections. Allergy testing isn't suggested unless children have signs of allergies.
Some people use herbal remedies, such as echinacea and garlic oil capsules, to treat ear infections. There is no scientific evidence that these therapies work. If you are thinking about using these therapies for your child's ear infection, talk with your doctor.
|Centers for Disease Control and Prevention|
|1600 Clifton Road|
|Atlanta, GA 30333|
The Get Smart Web site at the Centers for Disease Control and Prevention (CDC) provides information for both consumers and health professionals on the appropriate use of antibiotics. The Web site explains the dangers of inappropriate use of antibiotics and gives tips on actions people can take to feel better if they have an infection that cannot be helped by antibiotics. Some materials are available in English and in Spanish.
|American Academy of Family Physicians|
|P.O. Box 11210|
|Shawnee Mission, KS 66207-1210|
The American Academy of Family Physicians offers information on adult and child health conditions and healthy living. Its Web site has topics on medicines, doctor visits, physical and mental health issues, parenting, and more.
American Academy of Otolaryngology—Head and Neck Surgery (AAO-HNS)
1650 Diagonal Road
Alexandria, VA 22314-2857
The American Academy of Otolaryngology—Head and Neck Surgery (AAO-HNS) is the world's largest organization of physicians dedicated to the care of ear, nose, and throat (ENT) disorders. Its Web site includes information for the general public on ENT disorders.
American Academy of Pediatrics
141 Northwest Point Boulevard
Elk Grove Village, IL 60007-1098
The American Academy of Pediatrics (AAP) offers a variety of educational materials about parenting, general growth and development, immunizations, safety, disease prevention, and more. AAP guidelines for various conditions and links to other organizations are also available.
KidsHealth for Parents, Children, and Teens
10140 Centurion Parkway North
Jacksonville, FL 32256
This website is sponsored by the Nemours Foundation. It has a wide range of information about children's health, from allergies and diseases to normal growth and development (birth to adolescence). This website offers separate areas for kids, teens, and parents, each providing age-appropriate information that the child or parent can understand. You can sign up to get weekly emails about your area of interest.
National Institute on Deafness and Other Communication Disorders, National Institutes of Health
31 Center Drive, MSC 2320
Bethesda, MD 20892-2320
The National Institute on Deafness and Other Communication Disorders, part of the U.S. National Institutes of Health, advances research in all aspects of human communication and helps people who have communication disorders. The website has information about hearing, balance, smell, taste, voice, speech, and language.
- Kelley PE, et al. (2009). Ear, nose, and throat. In WW Hay et al., eds., Current Diagnosis and Treatment: Pediatrics, 19th ed., pp. 437–470. New York: McGraw-Hill.
- American Academy of Pediatrics and American Academy of Family Physicians (2004). Clinical practice guideline: Diagnosis and management of acute otitis media. Pediatrics, 113(5): 1451–1465.
- Weinberger PM, Terris DJ (2010). Otitis media section of Otolaryngology-Head and neck surgery. In GM Doherty, ed., Current Diagnosis and Treatment: Surgery, 13th ed., pp. 228–229. New York: McGraw-Hill.
- Macfadyen CA, et al. (2006). Systemic antibiotics versus topical treatments for chronically discharging ears with underlying eardrum perforations. Cochrane Database of Systematic Reviews (1). Oxford: Update Software.
- Rovers MM, et al. (2004). Otitis media. Lancet, 363(9407): 465–473.
- Pneumococcal vaccine (Prevnar) for otitis media (2003). Medical Letter on Drugs and Therapeutics, 45 (W1153B): 27–28.
Other Works Consulted
- Bradley-Stevenson C, et al. (2007). AOM in children (acute), search date January 2007. Online version of BMJ Clinical Evidence: http://www.clinicalevidence.com.
- Glasziou PP, et al. (2004). Antibiotics for acute otitis media in children. Cochrane Database of Systematic Reviews (1). Oxford: Update Software.
- Kerschner JE (2007). Otitis media. In RM Kliegman et al., eds., Nelson Textbook of Pediatrics, 18th ed., pp. 2632–2646. Philadelphia: Saunders Elsevier.
- Klein JO, Bluestone CD (2009). Otitis media. In RD Feigin et al., eds., Feigin and Cherry's Textbook of Pediatric Infectious Diseases, 6th ed., vol. 1, pp. 216–236. Philadelphia: Saunders Elsevier.
- Yates PD, Anari S (2008). Otitis media. In AK Lalwani, ed., Current Diagnosis and Treatment in Otolaryngology—Head and Neck Surgery, pp. 655–665. New York: McGraw-Hill.
Primary Medical Reviewer: Michael J. Sexton, MD - Pediatrics
Specialist Medical Reviewer: Charles M. Myer, III, MD - Otolaryngology
Last Revised: May 9, 2011
To learn more visit Healthwise.org
© 1995-2012 Healthwise, Incorporated. Healthwise, Healthwise for every health decision, and the Healthwise logo are trademarks of Healthwise, Incorporated. | http://www.sanfordhealth.org/HealthInformation/Healthwise/Topic/hw184385 | 13 |
16 | Malaria is a serious condition that is common in some tropical countries. It is important that you take measures to reduce your risk of infection when you travel to these areas. This leaflet gives general information about malaria and how to avoid it. You should always see a doctor or nurse before travelling to a country with a malarial risk. They are provided with up-to-date information about the best antimalarial medication for each country.
What is malaria?
Malaria is a serious infection. It is common in tropical countries such as parts of Africa, Asia and South America. Malaria is a disease caused by a parasite (germ) called Plasmodium that lives in mosquitoes. The parasite is passed to humans from a mosquito bite.
There are four types of plasmodium that cause malaria. These are called Plasmodium falciparum, Plasmodium vivax, Plasmodium ovale and Plasmodium malariae. Most cases of malaria brought into the UK are due to Plasmodium falciparum. This type of malaria is also the most likely to result in severe illness and/or death.
Most infections occur in travellers returning to the UK (rather than visitors coming to the UK). The risk of getting malaria is greatest if you do not take antimalarial medication or do not take it properly. People who take last-minute holidays and those visiting friends or relatives abroad have been shown to be the least likely to take antimalarial medication.
Each year around 1,700 people in the UK develop malaria which has been caught whilst abroad. Seven people died from malaria in the UK in 2010. Malaria can kill people very quickly if it is not diagnosed promptly.
The most common symptom of malaria is a high fever. Malaria can also cause muscle pains, headaches, diarrhoea and a cough.
Note: if you feel unwell and have recently visited an area in which there is malaria, you should seek prompt medical advice, even if you have taken your antimalarial medication correctly.
See separate leaflet called 'Malaria' for more detail.
Preventing malaria - four steps
There is an ABCD for prevention of malaria. This is:
- Awareness of risk of malaria.
- Bite prevention.
- Chemoprophylaxis (taking antimalarial medication exactly as prescribed).
- Prompt Diagnosis and treatment.
Awareness of the risk of malaria
The risk varies between countries and the type of trip. For example, back-packing or travelling to rural areas is generally more risky than staying in urban hotels. In some countries the risk varies between seasons - malaria is more common in the wet season. The main type of parasite, and the amount of resistance to medication, varies in different countries. Although risk varies, all travellers to malaria-risk countries should take precautions to prevent malaria.
The mosquitoes which transmit malaria commonly fly from dusk to dawn and therefore evenings and nights are the most dangerous time for transmission.
You should apply an effective insect repellent to clothing and any exposed skin. Diethyltoluamide (DEET) is safe and the most effective insect repellent and can be sprayed on to clothes. It lasts up to three hours at a concentration of 20% DEET, up to six hours at 30%, and up to 12 hours at 50%. There is no further increase in duration of protection beyond a concentration of 50%. When both sunscreen and DEET are required, DEET should be applied after the sunscreen has been applied. DEET can be used on babies and children over two months of age. In addition, DEET can be used, in a concentration of up to 50%, if you are pregnant. It is also safe to use if you are breast-feeding.
If you sleep outdoors or in an unscreened room, you should use mosquito nets impregnated with an insecticide (such as pyrethroid). The net should be long enough to fall to the floor all round your bed and be tucked under the mattress. Check the net regularly for holes. Nets need to be re-impregnated with insecticide every six to twelve months (depending on how frequently the net is washed) to remain effective. Long-lasting nets, in which the pyrethroid is incorporated into the material of the net itself, are now available and can last up to five years.
If practical, and if you are outside after sunset, you should try to cover up bare areas with long-sleeved, loose-fitting clothing, long trousers and socks to reduce the risk of mosquitoes biting. Clothing may be sprayed or impregnated with permethrin, which reduces the risk of being bitten through your clothes.
Sleeping in an air-conditioned room reduces the likelihood of mosquito bites, due to the room temperature being lowered. Doors, windows and other possible mosquito entry routes to sleeping accommodation should be screened with fine mesh netting. You should spray the room before dusk with an insecticide (usually a pyrethroid) to kill any mosquitoes that may have come into the room during the day. If electricity is available, you should use an electrically heated device to vaporise a tablet containing a synthetic pyrethroid in the room during the night. The burning of a mosquito coil is not as effective.
Herbal remedies have not been tested for their ability to prevent or treat malaria and are therefore not recommended. Likewise, there is no scientific proof that homoeopathic remedies are effective in either preventing or treating malaria, and they are also not recommended.
Antimalarial medication (chemoprophylaxis)
Antimalarial medication helps to prevent malaria. The best medication to take depends on the country you visit. This is because the type of parasite varies between different parts of the world. Also, in some areas the parasite has become resistant to certain medicines.
Antimalarials that you buy in the tropics or over the Internet may be fake. It is therefore recommended that you obtain your antimalarial treatment from your doctor's surgery, a pharmacist or a travel clinic. Medications to protect against malaria are not funded by the NHS. You will need to buy them, regardless of where you obtain them.
The type of medication advised will depend upon the area you are travelling to. It will also depend on any health problems you have, any medication you are currently taking, the length of your stay, and also any problems you may have had with antimalarial medication in the past.
You should seek advice for each new trip abroad. Do not assume that the medication that you took for your last trip will be advised for your next trip, even to the same country. There is a changing pattern of resistance to some medicines by the parasites. Doctors, nurses, pharmacists and travel clinics are updated regularly on the best medication to take for each country.
You must take the medication exactly as advised. This usually involves starting the medication up to a week or more before you go on your trip. This allows the level of medicine in your body to become effective. It also gives time to check for any side-effects before travelling. It is also essential that you continue taking the medication for the correct time advised after returning to the UK (often for four weeks). The most common reason for malaria to develop in travellers is because the antimalarial medication is not taken correctly. For example, some doses may be missed or forgotten, or the tablets may be stopped too soon after returning from the journey.
What are the side-effects with antimalarial medication?
Antimalarial medication is usually well tolerated. The most common side-effects are minor and include nausea (feeling sick) or diarrhoea. However, some people develop more severe side-effects. Therefore, always read the information sheet which comes with a particular medicine for a list of possible side-effects and cautions. Usually, it is best to take the medication after meals to reduce possible side-effects.
If you are taking doxycycline then you need to use a high-factor sunscreen. This is because this medication makes the skin more sensitive to the effects of the sun.
Around 1 in 20 people taking mefloquine may develop headaches or have problems with sleep.
Note: medication is only a part of protection against malaria. It is not 100% effective and does not guarantee that you will not get malaria. The advice above on avoiding mosquito bites is just as important, even when you are taking antimalarial medication.
Symptoms of malaria (to help with prompt diagnosis)
Symptoms are similar to flu. They include fever, shivers, sweating, backache, joint pains, headache, vomiting, diarrhoea and sometimes delirium. These symptoms may take a week or more to develop after you have been bitten by a mosquito. Occasionally, it takes a year for symptoms to develop.
This means that you should suspect malaria in anyone with a feverish illness who has travelled to a malaria-risk area within the past year, especially in the previous three months.
- Pregnant women are at particular risk of severe malaria and should, ideally, not go to malaria-risk areas. Full discussion with a doctor is advisable if you are pregnant and intend to travel. Most antimalarial medications are thought to be safe to the unborn child. Some, such as mefloquine, should be avoided in the first twelve weeks of pregnancy.
- Non-pregnant women taking mefloquine should avoid becoming pregnant. You should continue with contraception for three months after the last dose.
- If you have epilepsy, kidney failure, some forms of mental illness, and some other uncommon illnesses, you may have a restricted choice of antimalarial medication. This may be due to your condition, or to possible interactions with other medication that you may be taking.
- If you do not have a spleen (if you have had it removed) or your spleen does not work well, then you have a particularly high risk of developing severe malaria. Ideally, you should not travel to a malaria-risk country. However, if travel is essential, every effort should be made to avoid infection and you should be very strict about taking your antimalarial medication.
- Travellers going to remote places far from medical facilities sometimes take emergency medication with them. This can be used to treat suspected malaria until proper medical care is available.
Further reading & references
- Guidelines for malaria prevention in travellers from the United Kingdom, Health Protection Agency (January 2007)
- Malaria, National Travel Health Network and Centre (NaTHNaC)
- Malaria Fact Sheet No 94, World Health Organization, 2010
- Chiodini J; The standard of malaria prevention advice in UK primary care. Travel Med Infect Dis. 2009 May;7(3):165-8. Epub 2009 Mar 21.
- Lalloo DG, Hill DR; Preventing malaria in travellers. BMJ. 2008 Jun 14;336(7657):1362-6.
Original Author: Dr Tim Kenny | Current Version: Dr Laurence Knott | Peer Reviewer: Dr Tim Kenny
Last Checked: 15/12/2011 | Document ID: 4416 | Version: 41 | © EMIS
Disclaimer: This article is for information only and should not be used for the diagnosis or treatment of medical conditions. EMIS has used all reasonable care in compiling the information but makes no warranty as to its accuracy. Consult a doctor or other health care professional for diagnosis and treatment of medical conditions. For details see our conditions. | http://www.patient.co.uk/health/Malaria-Prevention.htm | 13
24 | Curriculum & Resources: Individual and Community Resilience
Great Resources for Teaching
from the October 2010 YES! Education Connection Newsletter
Read the newsletter: Go Green! Go Simple! Preparing your students for an uncertain world
What makes teenage brains unique? What happens when people from all walks of life play an alternate reality game to create a better future? Here are two classroom resources that will inspire your students to explore individual and community resilience.
Inside the Teenage Brain
Teenagers can be a mystery. One minute, they’re sweet, earnest, and on task. Then, snarly, evasive, and bouncing off the walls the next.
Frontline’s series “ Inside the Teenage Brain” explores scientific research and explanations for teenage behavior. Neuroscientists say the brain is like a house that is built in the early years, and the rest of childhood and teenage years is getting the furniture in the house and in the right place. Extensive changes in brain development—referred to as pruning and strengthening—during puberty occur simultaneously with raging hormones.
Sleep, mood swings, risky behavior, neuroresearch, public policy, and parenting tips are deftly discussed in this fascinating and helpful program. No matter the research, the experts on the show say that the biggest difference in a teen’s life is the quality time he or she spends with parents or other adults.
Episodes include: Teenagers Inexplicable Behavior, The Wiring of the Adolescent Brain, Mood Swings, You Just Don't Understand, From Zzzzs to A's, and Are There Lessons for Parents?
To enter the Frontline series and recesses of the teen brain, click here: Inside the Teenage Brain
EXPLORE: Anatomy of a Teen Brain
NY TIMES LESSON PLAN: What Were They Thinking? Exploring Teen Brain Development
In this lesson, your students will review recent scientific research on the teenage brain, including the Frontline series, "Inside the Teenage Brain," and hold a mini-symposium to discuss its implications for topics related to teens' freedom and accountability. Your students will note differences between adult and teen brains; what methods neuroscientists use to research those differences; and how that research is applied to parenting (think curfews and hanging out with friends) and public policy, such as teenage driver laws.
- Discovering the Beauty of Teenagers
Photographer John Hasyn held a photo workshop with Inuit youth from Nunavet. His experience overcame his fear of teenagers and changed his perspective forever.
- Project Happiness: 7 Doors Project
Explore the idea of real and lasting happiness for teenagers.
- This is Your Brain on Bliss
After 2,000 years of practice, Buddhist monks know that one secret to happiness is simply to put your mind to it.
World Without Oil
In May 2007, people from all walks of life began to play a “what if” game. What if an oil crisis started? What would happen? How would the lives of ordinary people change?
To play the game, people visualized what would happen if an oil crisis hit the U.S. As the game unfolded and the crisis was in full swing, people told their stories of how the oil shortage affected their lives and what they were doing to cope. As World Without Oil continued, over 1900 people not only created an immensely complex disaster, but they also visualized realistic and achievable solutions via their own personal blog posts, videos, and voicemails.
Though the game is officially over, your students can still play and learn. World Without Oil’s 11 stand-alone lessons and grassroots simulation will engage students with questions about energy use, sustainability, the role energy plays in our economy, culture, worldview and history, and the threat of peak oil.
LESSON 1: Oil Crisis: Get into the Game
A global oil crisis has begun. Worldwide oil usage has grown to the point where supply can meet only 95% of demand. Begin the inquiry into the effects of less oil in our lives.
EXPLORE: Lesson One: Oil Crisis
LESSON 3: Life is Starting to Change
Widespread changes are starting. Goods and services that depended on cheap oil are failing.
EXPLORE: Lesson Three: Life is Starting to Change
LESSON 6: Food Without Oil
The impact of oil on our food supply is one of the most serious aspects of the oil crisis. Shortages are forcing many people to look for locally grown food.
To download all 11 World Without Oil lessons, in addition to a student guide on lessons, click here: http://worldwithoutoil.org/metateachers.htm
World Without Oil is an alternate reality game created to call attention to and spark dialogue about petroleum dependency. It also aims to inspire individuals to take steps toward living less oil-dependent, more resilient lives. World Without Oil was presented in 2007 by Independent Television Service (ITVS) with funding by the Corporation for Public Broadcasting. It continues through lesson plans for middle and high school teachers.
To explore more learning resources, visit the official website: World Without Oil
The above resources accompany the October 2010 YES! Education Connection Newsletter
READ NEWSLETTER: Go Green! Go Simple! Preparing your students for an uncertain world
| http://www.yesmagazine.org/for-teachers/curriculum/curriculum-resources-world-without-oil?icl=yesemail_ednews_oct10&ica=tnBrain | 13
42 | The balance sheet is one of four financial statements. It shows the financial position of a company as of the date issued. It lists a company's assets (e.g. cash, inventory), its liabilities (e.g. debt, accounts payable), and shareholders' equity. Unlike the other financial statements, it is accurate only at one moment in time, not over a period of time.
The balance sheet is the core of the financial statements. All other statements either feed into or are derived from the balance sheet. The income statement shows how the company's assets were used to generate revenue and income. The statement of cash flows shows how the cash balance changed over time and accounts for changes in various assets and liabilities. The statement of shareholders' equity shows how the equity portion of the balance sheet changed since the last one. Many analysts come to the balance sheet first to gauge the health of the company. It is often listed first on the quarterly or annual reports.
The basic equation of accounting is reflected in the balance sheet.
<math>Assets = Liabilities + Equity</math>
If you look at a balance sheet, you'll note that the total assets always equals the total of liabilities and equity. This reflects what the company owns (assets) and how what it owns came about, through the funding given it by liabilities (borrowings) and equity.
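To make the identity concrete, here is a minimal sketch in Python with made-up figures; the line items and amounts are purely illustrative and not drawn from any real filing. It totals each side of the balance sheet and solves for equity as the residual:

```python
# Hypothetical balance-sheet figures (in millions); purely illustrative.
assets = {
    "cash": 120,
    "accounts_receivable": 80,
    "inventory": 60,
    "property_plant_and_equipment": 240,
}
liabilities = {
    "accounts_payable": 70,
    "current_portion_of_long_term_debt": 30,
    "long_term_debt": 200,
}

total_assets = sum(assets.values())            # 500
total_liabilities = sum(liabilities.values())  # 300

# The accounting identity: Assets = Liabilities + Equity,
# so equity is whatever is left once all liabilities are covered.
equity = total_assets - total_liabilities      # 200

assert total_assets == total_liabilities + equity  # the sheet must balance
print(f"Assets {total_assets} = Liabilities {total_liabilities} + Equity {equity}")
```

Because equity is computed as the residual, the two sides balance by construction; on a published balance sheet, a mismatch would indicate an error in the bookkeeping.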
The balance sheet is either laid out in a side-to-side manner, with the assets on the left and liabilities and equity on the right, or in a vertical manner, with assets listed first, then liabilities, then equity.
Assets are listed in order of liquidity, starting with current assets (those which can be converted into cash within one full reporting cycle, usually one year) and starting those with cash, the most liquid of assets. As one moves down through the list, one comes across less liquid assets, such as:
- accounts receivable which must be collected from customers before they are cash, and
- inventory which must be converted into goods and / or sold before they become cash.
Liabilities are listed in order of when they come due, starting with those due within an accounting period (usually one year), such as accounts payable and the portion of long term debt due within that period. Long term liabilities include borrowings from banks, bonds issued, and other obligations that come due beyond one year.
Finally, shareholder equity is given. This includes:
- retained earnings (earnings not paid out as dividends or used to repurchase shares),
- stock at par value (the stated value of stock, such as $0.02 per share),
- additional paid in capital (what was paid to the company for its shares in excess of par value), and
- treasury stock (stock repurchased by the company on the open market, a negative number).
Things to remember
- Read the footnotes, as many, if not all, of the line items in the balance sheet are expanded upon with more detail there.
- Not all debt a company may be liable for will show up on the balance sheet. Always remember Enron!
- Different industries have different balance sheets, financial institutions being the most prominent example. Banks, for instance, show the deposits from their customers as a liability (which it is, the bank owes that money to the customers) and loans issued as assets. Both of these are debt obligations running in opposite directions, and belong in different portions of the balance sheet.
- Book value is a synonym for equity and is the "net worth" of the company (what it has left after all liabilities are paid from all assets -- go back to the equation above and solve for equity). However, if there is a lot of goodwill as part of the assets, well, you can't spend goodwill, so it's an "intangible" asset. Tangible book value removes goodwill and other intangible assets (such as intellectual property like patents) from the assets before subtracting out liabilities and is a stricter (more conservative) look at the net worth of the company. The sketch after this list shows the difference.
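A short sketch of that last point, again with invented numbers (none of these figures come from a real company), comparing book value with tangible book value:

```python
# Hypothetical figures (in millions); purely illustrative.
total_assets = 500
goodwill = 90
other_intangibles = 10
total_liabilities = 300

book_value = total_assets - total_liabilities                 # 200
tangible_book_value = (total_assets - goodwill - other_intangibles
                       - total_liabilities)                   # 100

print(f"book value: {book_value}, tangible book value: {tangible_book_value}")
```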
Related Fool Articles
- Foolish Fundamentals: The Balance Sheet
- Understanding a Bank's Balance Sheet - How a bank's balance sheet differs from that of typical companies
- Accounts payable
- Accounts receivable
- Cash flow statement
- Income statement
- Statement of shareholders' equity | http://wiki.fool.com/wiki/index.php?title=Balance_sheet&oldid=20876 | 13 |
17 | 1929: A Turning Point During the Weimar Republic
It is 1929 and the misery that had aided the efforts of Weimar’s enemies in the early 20s has been relieved by five years of economic growth and rising incomes. Germany has been admitted to the League of Nations and is once more an accepted member of the international community. Certainly the bitterness at Germany's defeat in the Great War and the humiliation of the Treaty of Versailles have not been forgotten but most Germans appear to have come to terms with the new Republic and its leaders.
Gustav Stresemann has just died. Germany has, in part as a result of his efforts, become a respected member of the international community again. Stresemann often spoke before the League of Nations. With his French and American counterparts, Aristide Briand and Frank Kellogg, he had helped negotiate the Paris Peace Pact, which bore the names of his fellow diplomats as the Kellogg-Briand Pact. Once again Gustav Stresemann had decided to take on the arduous job of leading a battle for a policy he felt was in his nation’s vital interest, even though he was tired and ill and knew that the opposition would be stubborn and vitriolic. Stresemann was the major force in negotiating and guiding the Young Plan through a plebiscite. This plan, although opposed by those on the right wing, won majority approval and further reduced Germany’s reparations payments.
How had Weimar Germany become by 1929 a peaceful relatively prosperous and creative society given its chaotic and crisis-ridden beginnings? What significant factors contributed to the survival and success of the Republic? What were the Republic’s vulnerabilities, which would allow its enemies to undermine it in the period between 1929 and 1933?
The Weimar Republic was a bold experiment. It was Germany's first democracy, a state in which elected representatives had real power. The new Weimar constitution attempted to blend the European parliamentary system with the American presidential system. In the pre- World War I period, only men twenty-five years of age and older had the right to vote, and their elected representatives had very little power. The Weimar constitution gave all men and women twenty years of age and older the right to vote. Women made up more than 52% of the potential electorate, and their support was vital to the new Republic. From a ballot, which often had thirty or more parties on it, Germans chose legislators who would make the policies that shaped their lives. Parties spanning a broad political spectrum from Communists on the far left to National Socialists (Nazis) on the far right competed in the Weimar elections. The Chancellor and the Cabinet needed to be approved by the Reichstag (legislature) and needed the Reichstag's continued support to stay in power.
Although the constitution makers expected the Chancellor to be the head of government, they included emergency provisions that would ultimately undermine the Republic. Gustav Stresemann was briefly Chancellor in 1923 and for six years foreign minister and close advisor to Chancellors. The constitution gave emergency powers to the directly elected President and made him the Commander-in-Chief of the armed forces. In times of crisis, these presidential powers would prove decisive. During the stable periods, Weimar Chancellors formed legislative majorities based on coalitions primarily of the Social Democrats, the Democratic Party, and the Catholic Center Party, all moderate parties that supported the Republic. However, as the economic situation deteriorated in 1930, and many disillusioned voters turned to extremist parties, the Republic's supporters could no longer command a majority. German democracy could no longer function as its creators had hoped. Ironically by 1932, Adolf Hitler, a dedicated foe of the Weimar Republic, was the only political leader capable of commanding a legislative majority. On January 30, 1933, an aged President von Hindenburg reluctantly named Hitler Chancellor of the Republic. Using his legislative majority and the support of Hindenburg's emergency presidential powers, Hitler proceeded to destroy the Weimar Republic.
Germany emerged from World War I with huge debts incurred to finance a costly war for almost five years. The treasury was empty, the currency was losing value, and Germany needed to pay its war debts and the huge reparations bill imposed on it by the Treaty of Versailles, which officially ended the war. The treaty also deprived Germany of territory, natural resources, and even ships, trains, and factory equipment. Her population was undernourished and contained many impoverished widows, orphans, and disabled veterans. The new German government struggled to deal with these crises, which had produced a serious hyperinflation. By 1924, after years of crisis management and attempts at tax and finance reform, the economy was stabilized with the help of foreign, particularly American, loans. A period of relative prosperity prevailed from 1924 to 1929. This relative "golden age" was reflected in the strong support for moderate pro-Weimar political parties in the 1928 elections. However, economic disaster struck with the onset of the world depression in 1929. The American stock market crash and bank failures led to a recall of American loans to Germany. This development added to Germany's economic hardship. Mass unemployment and suffering followed. Many Germans became increasingly disillusioned with the Weimar Republic and began to turn toward radical anti-democratic parties whose representatives promised to relieve their economic hardships.
Rigid class separation and considerable friction among the classes characterized pre-World War I German society. Aristocratic landowners looked down on middle and working class Germans and only grudgingly associated with wealthy businessmen and industrialists. Members of the middle class guarded their status and considered themselves to be superior to factory workers. The cooperation between middle and working class citizens, which had broken the aristocracy's monopoly of power in England, had not developed in Germany. In Weimar Germany, class distinctions, while somewhat modified, were still important. In particular, the middle class battled to preserve their higher social status and monetary advantages over the working class. Ruth Fischer wanted her German Communist party to champion the cause of the unemployed and unrepresented.
Gender issues were also controversial as some women's groups and the left-wing political parties attempted to create more equality between the sexes. Ruth Fischer struggled to keep the Communist party focused on these issues. As the Stalinists forced her out of the party the Communists lost this focus. Other women's groups, conservative and radical right-wing political parties, and many members of the clergy resisted the changes that Fischer and her supporters advocated. The constitution mandated considerable gender equality, but tradition and the civil and criminal codes were still strongly patriarchal and contributed to perpetuating inequality. Marriage and divorce laws and questions of morality and sexuality were all areas of ferment and debate.
Weimar Germany was a center of artistic innovation, great creativity, and considerable experimentation. In film, the visual arts, architecture, craft, theater, and music, Germans were in the forefront of the most exciting developments. The unprecedented freedom and widespread latitude for varieties of cultural expression led to an explosion of artistic production. In the Bauhaus arts and crafts school, in the studios of the film company UFA, in the theater of Max Reinhardt and the studios of the New Objectivity (Neue Sachlichkeit) artists, cutting edge work was being produced. While many applauded these efforts, conservative and radical right-wing critics decried the new cultural products as decadent and immoral. They condemned Weimar Germany as a new Sodom and Gomorrah and attacked American influences, such as jazz music, as contributors to the decay.
Weimar Germany had a population that was about 65% Protestant, 34% Catholic and 1% Jewish. After German unification in 1871, the government had strongly favored the two major Protestant Churches, Lutheran and Reformed, which thought of themselves as state-sponsored churches. At the same time, the government had harassed and restricted the Catholic Church. Although German Catholics had only seen restrictions slowly lifted in the pre-World War I period, they nevertheless demonstrated their patriotism in World War I. German Jews, who had faced centuries of persecution and restriction, finally achieved legal equality in 1871. Jews also fought in record numbers during World War I and many distinguished themselves in combat. Antisemites refused to believe the army’s own figures and records and accused the Jews of undermining the war effort. The new legal equality of the Weimar period did not translate into social equality, and the Jews remained the "other" in Germany.
Catholics and Jews both benefited from the founding of the Weimar Republic. Catholics entered the government in leadership positions, and Jews participated actively in Weimar cultural life. Many Protestant clergymen resented the loss of their privileged status. While many slowly accepted the new Republic, others were never reconciled to it. Both Protestant and Catholic clergy were suspicious of the Socialists who were a part of the ruling group in Weimar and who often voiced Marxist hostility toward religion. Conflicts over religion and education and religion and gender policies were often intense during the Weimar years. The growth of the Communist Party in Germany alarmed Protestant and Catholic clergy, and the strong support the Catholic Center Political Party had given to the Republic weakened in the last years of the Republic. While Jews had unprecedented opportunities during the Weimar period, their accomplishments and increased visibility added resentment to long-standing prejudices and hatreds and fueled a growing antisemitism.
Stresemann portrait: Deutsches Bundesarchiv (German Federal Archive) | http://weimar.facinghistory.org/content/1929-turning-point-during-weimar-republic | 13 |
37 | When scientists first began using rockets for research, their eyes were focused upward, on the mysteries that lay beyond our atmosphere and our planet. But it wasn't long before they realized that this new technology could also give them a unique vantage point from which to look back at Earth.
Scientists working with V-2 and early sounding rockets for the Naval Research Laboratory (NRL) took the first steps in this direction almost ten years before Goddard was formed. The scientists put aircraft gun cameras on several rockets in an attempt to determine which way the rockets were pointing. When the film from one of these rockets was developed, it had recorded images of a huge tropical storm over Brownsville, Texas. Because the rocket was spinning, the image wasn't a neat, complete picture, but Otto Berg, the scientist who had modified the camera to take the photo, took the separate images home and pasted them together on a flat board. He then took the collage to Life magazine, which published what was arguably one of the earliest weather photos ever taken from space.1
Space also offered unique possibilities for communication that were recognized by industry and the military several years before NASA was organized. Project RAND2 had published several reports in the early 1950s outlining the potential benefits of satellite-based communication relays, and both AT&T and Hughes had conducted internal company studies on the commercial viability of communication satellites by 1959.3
These rudimentary seeds, already sown by the time Goddard opened its doors, grew into an amazing variety of communication, weather, and other remote-sensing satellite projects at the Center that have revolutionized many aspects of our lives. They have also taught us significant and surprising things about the planet we inhabit. Our awareness of large-scale crop and forest conditions, ozone depletion, greenhouse warming, and El Nino weather patterns has increased dramatically because of our ability to look back on Earth from space. Satellites have allowed us to measure the shape of the Earth more accurately, track the movement of tectonic plates, and analyze portions of the atmosphere and areas of the world that are hard to reach from the ground.
In addition, the "big picture" perspective satellites offer has allowed scientists to begin investigating the dynamics between different individual processes and the development and behavior of global patterns and systems. Ironically, it seems we have had to develop the ability to leave our planet before we could begin to fully understand it.
From the very earliest days of the space program, scientists realized that satellites could offer an important side-benefit to researchers interested in mapping the gravity field and shape of the Earth, and Goddard played an important role in this effort. The field of geodesy, or the study of the gravitational field of the Earth and its relationship to the solid structure of the planet, dates back to the third century B.C., when the Greek astronomer Eratosthenes combined astronomical observation with land measurement to try to prove that the Earth was, in fact, round. Later astronomers and scientists had used other methods of triangulation to try to estimate the exact size of the Earth. Astronomers also had used the Moon, or stars with established locations, to try to map the shape of the Earth and exact distances between points more precisely. But satellites offered a new twist to this methodology.
For one thing, the Earth's shape and gravity field affected the orbit of satellites. So at the beginning of the space age, Goddard's tracking and characterizing the orbit of the first satellites was in and of itself a scientific endeavor. From that orbital data, scientists could infer information about the Earth's gravity field, which is affected by the distribution of its mass. The Earth, as it turns out, is not perfectly round, and its mass is not perfectly distributed. There are places where land or ocean topography results in denser or less dense mass accumulation. The centrifugal force of the Earth's rotation combines with gravity and these mass concentrations to create bulges and depressions in the planet. In fact, although we think of the Earth as round, Goddard's research showed us that it is really slightly pear-shaped.
Successive Goddard satellites enabled scientists to gather much more precise information about the Earth's shape as well as exact positions of points on the planet. In fact, within 10 years, scientists had learned as much again about global positioning, the size and shape of the Earth, and its gravity field as their predecessors had learned in the previous 200 years.
Laser reflectors on Goddard satellites launched in 1965, 1968, and 1976, for example, allowed scientists to make much more precise measurements between points, which enabled them to determine the exact location or movement of objects. The laser reflectors developed for Goddard's LAGEOS satellite, launched in 1976, could determine movement or position within a few centimeters, which allowed scientists to track and analyze tectonic plate movement and continental drift. Among other things, the satellite data told scientists that the continents seem to be inherently rigid bodies, even if they contain divisive bodies of water, such as the Mississippi River, and that continental plate movement appears to occur at a constant rate over time. Plate movement information provided by satellites has also helped geologists track the dynamics that lead up to Earthquakes, which is an important step in predicting these potentially catastrophic events.
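As a rough illustration of how this laser ranging works (a back-of-the-envelope sketch, not the actual LAGEOS processing; the altitude figure is approximate), the Python snippet below converts a laser pulse's round-trip travel time into a station-to-satellite distance and shows how precise the timing must be to resolve a few centimeters:

```python
# Back-of-the-envelope satellite laser ranging: distance from round-trip time.
# Figures are illustrative, not actual LAGEOS measurements.
C = 299_792_458.0  # speed of light, m/s

def one_way_range(round_trip_seconds: float) -> float:
    """Station-to-satellite distance implied by a laser pulse's round-trip time."""
    return C * round_trip_seconds / 2.0

# LAGEOS orbits at roughly 5,900 km, so a pulse takes about 39 ms up and back.
round_trip = 2 * 5.9e6 / C
print(f"round trip ~ {round_trip * 1e3:.1f} ms -> range ~ {one_way_range(round_trip) / 1e3:.0f} km")

# To pin the range down to about 1 cm, the round-trip timing must be known to
# roughly 2 * 0.01 m / c, i.e. a few tens of picoseconds.
timing_precision = 2 * 0.01 / C
print(f"timing precision needed for 1 cm: ~{timing_precision * 1e12:.0f} ps")
```

Real ranging systems also have to correct for effects this sketch ignores, such as the delay a pulse picks up passing through the atmosphere.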
The satellite positioning technique used for this plate tectonic research was the precursor to the Global Positioning System (GPS) technology that now uses a constellation of satellites to provide precise three-dimensional navigation for aircraft and other vehicles. Yet although a viable commercial market is developing for GPS technology today, the greatest commercial application of space has remained the field of communication satellites.4
For all the talk about the commercial possibilities of space, the only area that has proven substantially profitable since 1959 is communication satellites, and Goddard played an important role in developing the early versions of these spacecraft. The industry managers who were conducting research studies and contemplating investment in this field in 1959 could not have predicted the staggering explosion of demand for communications that has accompanied the so-called "Information Age." But they saw how dramatically demand for telephone service had increased since World War II, and they saw potential in other communications technology markets, such as better or broader transmission for television and radio signals. As a result, several companies were even willing to invest their own money, if necessary, to develop communication satellites.
The Department of Defense (DoD) actually had been working on communication satellite technology for a number of years, and it wanted to keep control of what it considered a critical technology. So when NASA was organized, responsibility for communication satellite technology development was split between the new space agency and the DoD. The DoD would continue responsibility for "active" communication satellites, which added power to incoming signals and actively transmitted the signals back to ground stations. NASA's role was initially limited to "passive" communication satellites, which relied on simply reflecting signals off the satellite to send them back to Earth.5
NASA's first communication satellite, consequently, was a passive spacecraft called "Echo." It was based on a balloon design by an engineer at NASA's Langley Research Center and developed by Langley, Goddard, JPL and AT&T. Echo was, in essence, a giant mylar balloon, 100 feet in diameter, that could "bounce" a radio signal back down to another ground station a long distance away from the first one.
Echo I, the world's first communication satellite, was successfully put into orbit on 12 August 1960. Soon after launch, it reflected a pre-taped message from President Dwight Eisenhower across the country and other radio messages to Europe, demonstrating the potential of global radio communications via satellite. It also generated a lot of public interest, because the sphere was so large that it could be seen from the ground with the naked eye as it passed by overhead.
Echo I had some problems, however. The sphere seemed to buckle somewhat, hampering its signal-reflecting ability. So in 1964, a larger and stronger passive satellite, Echo II, was put into orbit. Echo II was made of a material 20 times more resistant to buckling than Echo I and was almost 40 feet wider in diameter.
Echo II also experienced some difficulties with buckling. But the main reason the Echo satellites were not pursued any further was not that the concept wouldn't work. It was simply that it was eclipsed by much better technology - active communication satellites.6
Syncom, Telstar, and Relay
By 1960, Hughes, RCA, and AT&T were all advocating the development of active communication satellites. They differed in the kind of satellite they recommended, however. Hughes felt strongly that the best system would be based on geosynchronous satellites. Geosynchronous satellites are in very high orbits - 22,300 miles above the ground. This high orbit allows their orbital speed to match the rotation speed of the Earth, which means they can remain essentially stable over one spot, providing a broad range of coverage 24 hours a day. Three of these satellites, for example, can provide coverage of the entire world, with the exception of the poles.
The disadvantage of using geosynchronous satellites for communications is that sending a signal up 22,300 miles and back causes a time-delay of approximately a quarter second in the signal. Arguing that this delay would be too annoying for telephone subscribers, both RCA and AT&T supported a bigger constellation of satellites in medium Earth orbit, only a few hundred miles above the Earth.7
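The 22,300-mile altitude and the quarter-second delay quoted here both follow from basic physics. The short Python sketch below uses standard textbook constants and assumes a ground station directly beneath the satellite; it recovers the geosynchronous altitude from Kepler's third law and the round-trip signal delay from the speed of light:

```python
import math

# Standard textbook constants (approximate).
MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_378_137.0       # Earth's equatorial radius, m
SIDEREAL_DAY = 86_164.1     # one full rotation of the Earth, s
C = 299_792_458.0           # speed of light, m/s

# Kepler's third law, T = 2*pi*sqrt(a**3 / mu), solved for the orbit radius a.
a = (MU_EARTH * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
altitude_m = a - R_EARTH

# Round-trip signal delay for a station directly beneath the satellite.
delay_s = 2 * altitude_m / C

print(f"geosynchronous altitude ~ {altitude_m / 1e3:,.0f} km ({altitude_m / 1609.344:,.0f} miles)")
print(f"up-and-back signal delay ~ {delay_s:.2f} s")
```

Running this gives an altitude of roughly 35,800 km (about 22,200 miles) and a round-trip delay of about 0.24 seconds, consistent with the figures in the text.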
The Department of Defense had been working on its own geosynchronous communication satellite, but the project was running into significant development problems and delays. NASA had been given permission by 1960 to pursue active communication satellite technology as well as passive systems, so the DoD approached NASA about giving Hughes a sole-source contract to develop an experimental geosynchronous satellite. The result was Syncom, a geosynchronous satellite design built by Hughes under contract to Goddard.
Hughes already had begun investing its own money and effort in the technology, so Syncom I was ready for Goddard to launch in February 1963 - only 17 months after the contract was awarded. Syncom I stopped sending signals a few seconds before it was inserted into its final orbit, but Syncom II was launched successfully five months later, demonstrating the viability of the system. The third Syncom satellite, launched in August 1964, transmitted live television coverage of the Olympic Games in Tokyo, Japan to stations in North America and Europe.
Although the military favored the geosynchronous concept, it was not the only technology being developed. In 1961, Goddard began working with RCA on the "Relay" satellite, which was launched 13 December 1962. Relay was designed to demonstrate the feasibility of medium-orbit, wide-band communications satellite technology and to help develop the ground station operations necessary for such a system. It was a very successful project, transmitting even color television signals across wide distances.
AT&T, meanwhile, had run into political problems with NASA and government officials who were concerned that the big telecommunications conglomerate would end up monopolizing what was recognized as potentially powerful technology. But when NASA chose to fund RCA's Relay satellite instead of AT&T's design, AT&T decided to simply use its own money to develop a medium orbit communications satellite, which it called Telstar. NASA would launch the satellite, but AT&T would reimburse NASA for the costs involved. Telstar 1 was launched on 10 July 1962, and a second Telstar satellite followed less than a year later. Both satellites were very successful, and Telstar 2 demonstrated that it could even transmit both color and black and white television signals between the United States and Europe.
In some senses, Relay and Telstar were competitors. But RCA and AT&T, who were both working with managers at Goddard, reportedly cooperated very well with each other. Each of the efforts was seen as helping to advance the technology necessary for this new satellite industry to become viable, and both companies saw the potential profit of that in the long run.
By 1962, it was clear that satellite communications technology worked, and there was going to be money made in its use. Fearful of the powerful monopoly satellites could offer a single company, Congress passed the Satellite Communications Act, setting up a consortium of existing communications carriers to run the satellite communications industry. Individual companies could bid to sell satellites to the consortium, but no single company would own the system. NASA would launch the satellites for Comsat, as the consortium was called, but Comsat would run the operations.
In 1964, the Comsat consortium was expanded further with the formation of the International Telecommunications Satellite Organization, commonly known as "Intelsat," to establish a framework for international use of communication satellites. These organizations had the responsibility for choosing the type of satellite technology the system would use. The work of RCA, AT&T and Hughes had proven that either medium-altitude or geosynchronous satellites could work. But in 1965, the consortiums finally decided to base the international system on geosynchronous satellites similar to the Syncom design.8
Applications Technology Satellites
Having helped to develop the prototype satellites, Goddard stepped back from operational communication satellites and focused its efforts on developing advanced technology for future systems. Between 1966 and 1974, Goddard launched a total of six Applications Technology Satellites (ATS) to research advanced technology for communications and meteorological spacecraft. The ATS spacecraft were all put into geosynchronous orbits and investigated microwave and millimeter wavelengths for communication transmissions, methods for aircraft and marine navigation and communications, and various control technologies to improve geosynchronous satellites.
Four of the spacecraft were highly successful and provided valuable data for improving future communication satellites. The sixth ATS spacecraft, launched 30 May 1974, even experimented with transmitting health and education television to small, low-cost ground stations in remote areas. It also tested a geosynchronous satellite's ability to provide tracking and data transmission services for other satellites. Goddard's research in this area, and the expertise the Center developed in the process, made it possible for NASA to develop the Tracking and Data Relay Satellite System (TDRSS) the agency still uses today.9
After ATS-6, NASA transferred responsibility for future communication satellite research to the Lewis Research Center. Goddard, however, maintained responsibility for developing and operating the TDRSS tracking and data satellite system.10
Statistically, the United States has the world's most violent weather. In a typical year, the U.S. will endure some 10,000 violent thunderstorms, 5,000 floods, 1,000 tornadoes, and several hurricanes.11 Improving weather prediction, therefore, has been a high priority of meteorologists here for a very long time.
The early sounding rocket flights began to indicate some of the possibilities space flight might offer in terms of understanding and forecasting the weather, and they prompted the military to pursue development of a meteorological satellite. The Advanced Research Projects Agency (ARPA)12 had a group of scientists and engineers working on this project at the U.S. Army Signal Engineering Laboratories in Ft. Monmouth, New Jersey when NASA was first organized. Recognizing the country's history of providing weather services to the public through a civilian agency, the military agreed to transfer the research group to NASA. These scientists and engineers became one of the founding units of Goddard in 1958.
Television and Infrared Observation Satellites
These Goddard researchers were working on a project called the Television and Infrared Observation Satellite (TIROS). When it was launched on 1 April 1960, it became the world's first meteorological satellite, returning thousands of images of cloud cover and spiralling storm systems. Goddard's Explorer VI satellite had recorded some crude cloud cover images before TIROS I was launched, but the TIROS satellite was the first spacecraft dedicated to meteorological data gathering and transmitted the first really good cloud cover photographs. 13
Clearly, there was a lot of potential in this new technology, and other meteorological satellites soon followed the first TIROS spacecraft. Despite its name, the first TIROS carried only television cameras. The second TIROS satellite, launched in November 1960, also included an infrared instrument, which gave it the ability to detect cloud cover even at night.
The TIROS capabilities were limited, but the satellites still provided a tremendous service in terms of weather forecasting. One of the biggest obstacles meteorologists faced was the local, "spotty" nature of the data they could obtain. Weather balloons and ocean buoys could only collect data in their immediate area. Huge sections of the globe, especially over the oceans, were dark areas where little meteorological information was available. This made forecasting a difficult task, especially for coastal areas.
Sounding rockets offered the ability to take measurements at all altitudes of the atmosphere, which helped provide temperature, density and water vapor information. But sounding rockets, too, were limited in the scope of their coverage. Satellites offered the first chance to get a "big picture" perspective on weather patterns and storm systems as they travelled around the globe.
Because weather forecasting was an operational task that usually fell under the management of the Weather Bureau, there was some disagreement about who should have responsibility for designing and operating this new class of satellite. Some people at Goddard felt that NASA should take the lead, because the new technology was satellite-based. The Weather Bureau, on the other hand, was going to be paying for the satellites and wanted control over the type of spacecraft and instruments they were funding. When the dust settled, it was decided that NASA would conduct research on advanced meteorological satellite technology and would manage the building, launching and testing of operational weather satellites. The Weather Bureau would have final say over operational satellite design, however, and would take over management of spacecraft operations after the initial test phase was completed.14
The TIROS satellites continued to improve throughout the early 1960s.
Although the spacecraft were officially research satellites, they also provided the Weather Bureau with a semi-operational weather satellite system from 1961 to 1965. TIROS III, launched in July 1961, detected numerous hurricanes, tropical storms, and weather fronts around the world that conventional ground networks missed or would not have seen for several more days.15 TIROS IX, launched in January 1965, was the first of the series launched into a polar orbit, rotating around the Earth in a north-south direction. This orientation allowed the satellite to cross the equator at the same time each day and provided coverage of the entire globe, including the higher latitudes and polar regions, as its orbit precessed around the Earth.
The later TIROS satellites also improved their coverage by changing the location of the spacecraft's camera. The TIROS satellites were designed like a wheel of cheese. The wheel spun around but, like a toy top or gyroscope, the axis of the wheel kept pointing in the same direction as the satellite orbited the Earth. The cameras were placed on the satellite's axis, which allowed them to take continuous pictures of the Earth when that surface was actually facing the planet. Like dancers doing a do-si-do, however, the surface with the cameras would be pointing parallel to or away from the Earth for more than half of the satellite's orbit. TIROS IX (and the operational TIROS satellites), put the camera on the rotating section of the wheel, which was kept facing perpendicular to the Earth throughout its orbit. This made the satellite operate more like a dancer twirling around while circling her partner. While the camera could only take pictures every few seconds, when the section of the wheel holding the camera rotated past the Earth, it could continue taking photographs throughout the satellite's entire orbit.
In 1964, Goddard took another step in developing more advanced weather satellites when it launched the first NIMBUS spacecraft. NASA had originally envisioned the larger and more sophisticated NIMBUS as the design for the Weather Bureau's operational satellites. The Weather Bureau decided that the NIMBUS spacecraft were too large and expensive, however, and opted to stay with the simpler TIROS design for the operational system. So the NIMBUS satellites were used as research vehicles to develop advanced instruments and technology for future weather satellites. Between 1964 and 1978, Goddard developed and launched a total of seven Nimbus research satellites.
In 1965, the Weather Bureau was absorbed into a new agency called the Environmental Science Services Administration (ESSA). The next year, NASA launched the first satellite in ESSA's operational weather system. The satellite was designed like the TIROS IX spacecraft and was designated "ESSA 1." As per NASA's agreement, Goddard continued to manage the building, launching and testing of ESSA's operational spacecraft, even as the Center's scientists and engineers worked to develop more advanced technology with separate research satellites.
The ESSA satellites were divided into two types. One took visual images of the Earth with an Automatic Picture Transmission (APT) camera system and transmitted them in real time to stations around the globe. The other stored its images and later transmitted them to a central ground station for global analysis. These first ESSA satellites were deployed in pairs in "Sun-synchronous" polar orbits around the Earth, crossing the same point at approximately the same time each day.
In 1970, Goddard launched an improved operational spacecraft for ESSA using "second generation" weather satellite technology. The Improved TIROS Operational System (ITOS), as the design was initially called, combined the functions of the previous pairs of ESSA satellites into a single spacecraft and added a day and night scanning radiometer. This improvement meant that meteorologists could get global cloud cover information every 12 hours instead of every 24 hours.
Soon after ITOS 1 was launched, ESSA evolved into the National Oceanic and Atmospheric Administration (NOAA), and successive ITOS satellites were redesignated as NOAA 1, 2, 3, etc. This designation system for NOAA's polar-orbiting satellites continues to this day.
In 1978, NASA launched the first of what was called the "third generation" of polar orbiting satellites. The TIROS-N design was a much bigger, three-axis-stabilized spacecraft that incorporated much more advanced equipment. The TIROS-N series of instruments, used aboard operational NOAA satellites today, provided much more accurate sea-surface temperature information, which is necessary to predict a phenomenon like an El Nino weather pattern. They also could identify snow and sea ice and could provide much better temperature profiles for different altitudes in the atmosphere.
But while the lower-altitude polar satellites can observe some phenomena in more detail because they are relatively close to the Earth, they can't provide the continuous "big picture" information a geosynchronous satellite can offer. So for the past 25 years, NOAA has operated two weather satellite systems - the TIROS series of polar orbiting satellites at lower altitudes, and two geosynchronous satellites more than 22,300 miles above the Earth.16
While polar-orbiting satellites were an improvement over the more equatorial-orbiting TIROS satellites, scientists realized that they could get a much better perspective on weather systems from a geosynchronous spacecraft. Goddard's research teams started investigating this technology with the launch of the first Applications Technology Satellite (ATS-1) in 1966. Because the ATS had a geosynchronous orbit that kept it "parked" above one spot, meteorologists could get progressive photographs of the same area over a period of time as often as every 30 minutes. The "satellite photos" showing changes in cloud cover that we now almost take for granted during nightly newscasts are made possible by geosynchronous weather satellites. Those cloud movement images also allowed meteorologists to infer wind currents and speeds. This information is particularly useful in determining weather patterns over areas of the world such as oceans or the tropics, where conventional aircraft and balloon methods can't easily gather data.
Goddard's ATS III satellite, launched in 1967, included a multi-color scanner that could provide images in color, as well. Shortly after its launch, ATS III took the first color image of the entire Earth, a photo made possible by the satellite's 22,300 mile high orbit.17
In 1974, Goddard followed its ATS work with a dedicated geosynchronous weather satellite called the Synchronous Meteorological Satellite (SMS). Both SMS-1 and SMS-2 were research prototypes, but they still provided meteorologists with practical information as they tested out new technology. In addition to providing continuous coverage of a broad area, the SMS satellites collected and relayed weather data from 10,000 automatic ground stations in six hours, giving forecasters more timely and detailed data than they had ever had before.
Goddard launched NOAA's first operational geostationary18 satellite, designated the Geostationary Operational Environmental Satellite (GOES) in October 1975. That satellite has led to a whole family of GOES spacecraft. As with previous operational satellites, Goddard managed the building, launching and testing of the GOES spacecraft.
The first seven GOES spacecraft, while geostationary, were still "spinning" designs like NOAA's earlier operational ESSA satellites. In the early 1980s, however, NOAA decided that it wanted the new series of geostationary GOES spacecraft to be three-axis stabilized, as well, and to incorporate significantly more advanced instruments. In addition, NOAA decided to award a single contract directly to an industry manufacturer for both the spacecraft and its instruments, instead of working separate instrument and spacecraft contracts through Goddard.
Goddard typically developed new instruments and technology on research satellites before putting them onto an operational spacecraft for NOAA. The plan for GOES 8,19 however, called for incorporating new technology instruments directly into a spacecraft that was itself a new design and also had an operational mission. Meteorologists across the country were going to rely on the new instruments for accurate weather forecasting information, which put a tremendous amount of added pressure on the designers. But the contractor selected to build the instruments underestimated the cost and complexity of developing the GOES 8 instruments. In addition, Goddard's traditional "Phase B" design study, which would have generated more concrete estimates of the time and cost involved in the instrument development, was eliminated on the GOES 8 project. The study was skipped in an attempt to save time, because NOAA was facing a potential crisis with its geostationary satellite system.
NOAA wanted to have two geostationary satellites up at any given point in order to adequately cover both coasts of the country. But the GOES 5 satellite failed in 1984, leaving only one geostationary satellite, GOES 6, in operation. The early demise of GOES 4 and GOES 5 left NOAA uneasy about how long GOES 6 would last, prompting the "streamlining" efforts on the GOES 8 spacecraft design. The problem became even more serious in 1986 when the launch vehicle for the GOES G spacecraft, which would have become GOES 7, failed after launch. Another GOES satellite was successfully launched in 1987, but the GOES 6 spacecraft failed in January 1989, leaving the United States once again with only one operational geostationary weather satellite.
By 1991, when the GOES 8 project could not predict a realistic launch date, because working instruments for the spacecraft still hadn't been developed, Congress began to investigate the issue. The GOES 7 spacecraft was aging, and managers and elected officials realized that it was entirely possible that the country might soon find itself without any geostationary satellite coverage at all.
To buy the time necessary to fix the GOES 8 project and alleviate concerns about coverage, NASA arranged with the Europeans to "borrow" one of their Eumetsat geostationary satellites. The satellite was allowed to "drift" further west so it sat closer to the North American coast, allowing NOAA to move the GOES 7 satellite further west.
Meanwhile, Goddard began to take a more active role in the GOES 8 project. A bigger GOES 8 project office was established at the Center and Goddard brought in some of its best instrument experts to work on the project, both at Goddard and at the contractor's facilities. Goddard, after all, had some of the best meteorological instrument-building expertise in the country. But because Goddard was not directly in charge of the instrument sub-contract, the Center had been handicapped in making that knowledge and experience available to the beleaguered contractor.
The project was a sobering reminder of the difficulties that could ensue when, in an effort to save time and money, designers attempted to streamline a development project or combine research and operational functions into a single spacecraft. But in 1994, the GOES 8 spacecraft was finally successfully launched, and the results have been impressive. Its advanced instruments performed as advertised, improving the spacecraft's focusing and atmospheric sounding abilities and significantly reducing the amount of time the satellite needed to scan any particular area.20
Earth Resources Satellites
As meteorological satellite technology developed and improved, Goddard scientists realized that the same instruments used for obtaining weather information could be used for other purposes, as well. Meteorologists could look at radiation that travelled back up from the Earth's surface to determine things like water vapor content and temperature profiles at different altitudes in the atmosphere. But those same emissions could reveal potentially valuable information about the Earth's surface, as well.
Objects at a temperature above absolute zero emit radiation, and many materials emit and reflect energy at characteristic wavelengths in the electromagnetic spectrum. So by analyzing the emissions of any object, from a star or comet to a particular section of forest or farmland, scientists can learn important things about its chemical composition. Instruments on the Nimbus spacecraft had the ability to look at reflected solar radiation from the Earth in several different wavelengths. As early as 1964, scientists began discussing the possibilities of experimenting with this technology to see what it might be able to show us about not only the atmosphere, but also resources on the Earth.
The result was the Earth Resources Technology Satellite (ERTS), launched in 1972 and later given the more popular name "Landsat 1." The spacecraft was based on a Nimbus satellite, with a multi-channel radiometer to look at different wavelength bands where the reflected energy from surfaces such as forests, water, or different crops would fall. The satellite instruments also had much better resolution than the Nimbus instruments. Each swath of the Earth covered by the Nimbus scanner was 1500 miles wide, with each pixel in the picture representing five miles. The polar-orbiting ERTS satellite instrument could focus in on a swath only 115 miles wide, with each pixel representing 80 meters. This resolution allowed scientists to view a small enough section of land, in enough detail, to conduct a worthwhile analysis of what it contained.
Images from the ERTS/Landsat satellite, for example, showed scientists a 25-mile wide geological feature near Reno, Nevada that appeared to be a previously undiscovered meteor crater. Other images collected by the satellite were useful in discovering water-bearing rocks in Nebraska, Illinois and New York and determining that water pollution drifted off the Atlantic coast as a cohesive unit, instead of dissipating in the ocean currents.
The success of the ERTS satellite prompted scientists to want to explore this use of satellite technology further. They began working on instruments that could get pixel resolutions as high as five meters, but were told to discontinue that research because of national security concerns. If a civilian satellite provided data that detailed, it might allow foreign countries to find out critical information about military installations or other important targets in the U.S. This example illustrates one of the ongoing difficulties with Earth resource satellite research. The fact that the same information can be used for both scientific and practical purposes often creates complications with not only who should be responsible for the work, but how and where the information will be used.
In any event, the follow-on satellite, "Landsat-2," was limited to the same levels of resolution. More recent Landsat spacecraft, however, have been able to improve instrument resolution further.21
Landsat 2 was launched in January 1975 and looked at land areas for an even greater number of variables than its ERTS predecessor, integrating information from ground stations with data obtained by the satellite's instruments. Because wet land and green crops reflect solar energy at different wavelengths than dry soil or brown plants, Landsat imagery enabled researchers to look at soil moisture levels and crop health over wide areas, as well as soil temperature, stream flows, and snow depth. Its data was used by the U.S. Department of Agriculture, the U.S. Forest Service, the Department of Commerce, the Army Corps of Engineers, the Environmental Protection Agency and the Department of Interior, as well as agencies from foreign countries.22
The Landsat program clearly was a success, particularly from a scientific perspective. It proved that satellite technology could determine valuable information about precious natural resources, agricultural activity, and environmental hazards. The question was who should operate the satellites. Once the instruments were developed, the Landsat spacecraft were going to be collecting the same data, over and over, instead of exploring new areas and technology. One could argue that by examining the evolution of land resources over time, scientists were still exploring new processes and gathering new scientific information about the Earth. But that same information was being used predominantly for practical purposes of natural resource management, agricultural and urban planning, and monitoring environmental hazards. NASA had never seen its role as providing ongoing, practical information, but there was no other agency with the expertise or charter to operate land resource satellites.
As a result, NASA continued to manage the building, launch, and space operation of the Landsat satellites until 1984. Processing and distribution of the satellite's data was managed by the Department of Interior, through an Earth Resources Observation System (EROS) Data Center that was built by the U.S. Geological Survey in Sioux Falls, South Dakota in 1972.
In 1979, the Carter Administration developed a new policy in which the Landsat program would be managed by NOAA and eventually turned over to the private sector. In 1984, the first Reagan Administration put that policy into effect, soliciting commercial bids for operating the system, which at that point consisted of two operational satellites. Landsat 4 had been launched in 1982 and Landsat 5 was launched in 1984. Ownership and operation of the system was officially turned over to the EOSAT Company in 1985, which sold the images to anyone who wanted them, including the government. At the same time, responsibility for overseeing the program was transferred from NASA to NOAA. Under the new program guidelines, the next spacecraft in the Landsat program, Landsat 6, would also be constructed independently by industry.
There were two big drawbacks with this move, however, as everyone soon found out. The first was that although there was something of a market for Landsat images, it was nothing like that surrounding the communication satellite industry. The EOSAT company found itself struggling to stay afloat. Prices for images jumped from the couple of hundred dollars per image that EROS had charged to $4,000 per shot, and EOSAT still found itself bordering on insolvency.
Being a private company, EOSAT also was concerned with making a profit, not archiving data for the good of science or the government. Government budgets wouldn't allow for purchasing thousands of archival images at $4,000 apiece, so the EROS Data Center only bought a few selected images each year. As a result, many of the scientific or archival benefits the system could have created were lost.
In 1992, the Land Remote Sensing Policy Act reversed the 1984 decision to commercialize the Landsat system, noting the scientific, national security, economic, and social utility of the Landsat images. Landsat 6 was launched the following year, but the spacecraft failed to reach orbit and ended up in the Indian Ocean.
This launch failure was discouraging, but planning for the next Landsat satellite was already underway. Goddard had agreed to manage design of a new data ground station for the satellite, and NASA and the Department of Defense initially agreed to divide responsibility for managing the satellite development. But the Air Force subsequently pulled out of the project and, in May 1994, management of the Landsat system was turned over to NASA, the U.S. Geological Survey (USGS), and NOAA. At the same time, Goddard assumed sole management responsibility for developing Landsat 7.
The only U.S. land resource satellites in operation at the moment are still Landsat 4 and 5, which are both degrading in capability. Landsat 5, in fact, is the only satellite still able to transmit images. The redesigned Landsat 7 satellite is scheduled for launch by mid-1999, and its data will once again be made available through the upgraded EROS facilities in Sioux Falls, South Dakota. Until then, scientists, farmers and other users of land resource information have to rely on Landsat 5 images through EOSAT, or they have to turn to foreign companies for the information.
The French and the Indians have both created commercial companies to sell land resource information from their satellites, but both companies are being heavily subsidized by their governments while a market for the images is developed. There is probably a viable commercial market that could be developed in the United States, as well. But it may be that the demand either needs to grow substantially on its own or would need government subsidy before a commercialization effort could succeed. The issue of scientific versus practical access to the information would also still have to be resolved.
No matter how the organization of the system is eventually structured, Landsat imagery has proven itself an extremely valuable tool for not only natural resource management but urban planning and agricultural assistance, as well. Former NASA Administrator James Fletcher even commented in 1975 that if he had one space-age development to save the world, it would be Landsat and its successor satellites.23 Without question, the Landsat technology has enabled us to learn much more about the Earth and its land-based resources. And as the population and industrial production on the planet increase, learning about the Earth and potential dangers to it has become an increasingly important priority for scientists and policy-makers alike.24
Atmospheric Research Satellites
One of the main elements scientists are trying to learn about the Earth is the composition and behavior of its atmosphere. In fact, Goddard's scientists have been investigating the dynamics of the Earth's atmosphere for scientific, as well as meteorological, purposes since the inception of the Center. Explorers 17, 19, and 32, for example, all researched various aspects of the density, composition, pressure and temperature of the Earth's atmosphere. Explorers 51 and 54, also known as "Atmosphere Explorers," investigated the chemical processes and energy transfer mechanisms that control the atmosphere.
Another goal of Goddard's atmospheric scientists was to understand and measure what was called the "Earth Radiation Budget." Scientists knew that radiation from the Sun enters the Earth's atmosphere. Some of that energy is reflected back into space, but most of it penetrates the atmosphere to warm the surface of the Earth. The Earth, in turn, radiates energy back into space. Scientists knew that the overall radiation received and released was about equal, but they wanted to know more about the dynamics of the process and seasonal or other fluctuations that might exist. Understanding this process is important because the excesses and deficits in this "budget," as well as variations in it over time or at different locations, create the energy to drive our planet's heating and weather patterns.
The first satellite to investigate the dynamics of the Earth Radiation Budget was Explorer VII, launched in 1959. Nimbus 2 provided the first global picture of the radiation budget, showing that the amount of energy reflected by the Earth's atmosphere was lower than scientists had thought. Additional instruments on Nimbus 3, 5, and 6, as well as operational TIROS and ESSA satellites, explored the dynamics of this complex process further. In the early 1980s, researchers developed an Earth Radiation Budget Experiment (ERBE) instrument that could better analyze the short-wavelength energy received from the Sun and the longer-wavelength energy radiated into space from the Earth. This instrument was put on a special Earth Radiation Budget Satellite (ERBS) launched in 1984, as well as the NOAA-9 and NOAA 10 weather satellites.
This instrument has provided scientists with information on how different kinds of clouds affect the amount of energy trapped in the Earth's atmosphere. Lower, thicker clouds, for example, reflect a portion of the Sun's energy back into space, creating a cooling effect on the surface and atmosphere of the Earth. High, thin cirrus clouds, on the other hand, let the Sun's energy in but trap some of the Earth's outgoing infrared radiation, reflecting it back to the ground. As a result, they can have a warming effect on the Earth's atmosphere. This warming effect can, in turn, create more evaporation, leading to more moisture in the air. This moisture can trap even more radiation in the atmosphere, creating a warming cycle that could influence the long-term climate of the Earth.
Because clouds and atmospheric water vapor seem to play a significant role in the radiation budget of the Earth as well as the amount of global warming and climate change that may occur over the next century, scientists are attempting to find out more about the convection cycle that transports water vapor into the atmosphere. In 1997, Goddard launched the Tropical Rainfall Measuring Mission (TRMM) satellite into a near-equatorial orbit to look more closely at the convection cycle in the tropics that powers much of the rest of the world's cloud and weather patterns. The TRMM satellite's Clouds and the Earth's Radiant Energy System (CERES) instrument, built by NASA's Langley Research Center, is an improved version of the earlier ERBE experiment. While the satellite's focus is on convection and rainfall in the lower atmosphere, some of that moisture does get transported into the upper atmosphere, where it can play a role in changing the Earth's radiation budget and overall climate.25
An even greater amount of atmospheric research, however, has been focused on a once little-known molecule made up of three oxygen atoms called ozone. Ozone, as most Americans now know, is a chemical in the upper atmosphere that blocks incoming ultraviolet rays from the Sun, protecting us from skin cancer and other harmful effects caused by ultraviolet radiation.
The ozone layer was first brought into the spotlight in the 1960s, when designers began working on the proposed Supersonic Transport (SST). Some scientists and environmentalists were concerned that the jet's high-altitude emissions might damage the ozone layer, and the federal government funded several research studies to evaluate the risk. The cancellation of the SST in 1971 shelved the issue, at least temporarily, but two years later a much greater potential threat emerged.
In 1973, two researchers at the University of California, Irvine came up with the astounding theory that certain man-made chemicals, called chlorofluorocarbons (CFCs), could damage the atmosphere's ozone layer. These chemicals were widely used in everything from hair spray to air conditioning systems, which meant that the world might have a dangerously serious problem on its hands.
In 1975, Congress directed NASA to develop a "comprehensive program of research, technology and monitoring of phenomena of the upper atmosphere" to evaluate the potential risk of ozone damage further. NASA was already conducting atmospheric research, but the Congressional mandate supported even wider efforts. NASA was not the only organization looking into the problem, either. Researchers around the world began focusing on learning more about the chemistry of the upper atmosphere and the behavior of the ozone layer.
Goddard's Nimbus IV research satellite, launched in 1970, already had an instrument on it to analyze ultraviolet rays that were "backscattered," or reflected, from different altitudes in the Earth's atmosphere. Different wavelengths of UV radiation should be absorbed by the ozone at different levels in the atmosphere. So by analyzing how much UV radiation was still present in different wavelengths, researchers could develop a profile of how thick or thin the ozone layer was at different altitudes and locations.
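To make the backscatter idea concrete, here is a minimal Python sketch. It is heavily simplified and purely illustrative, not the actual SBUV retrieval algorithm: it assumes simple Beer-Lambert absorption on a single pass, ignores scattering, and uses invented absorption cross-sections, but it shows how a strongly absorbed ultraviolet wavelength can be inverted to estimate how much ozone the light passed through.

import numpy as np

# Illustrative sketch only -- hypothetical cross-sections, single pass, no scattering.
cross_section = {"305nm": 4.0e-19, "331nm": 4.0e-20}   # assumed ozone absorption, cm^2
incident = {"305nm": 1.0, "331nm": 1.0}                # normalized incoming solar intensity

def backscattered(i0, sigma, ozone_column):
    """Beer-Lambert attenuation through an ozone column (molecules per cm^2)."""
    return i0 * np.exp(-sigma * ozone_column)

true_column = 8.0e18   # roughly 300 Dobson units expressed as molecules per cm^2
measured = {w: backscattered(incident[w], s, true_column) for w, s in cross_section.items()}

# "Retrieval": invert the strongly absorbed channel to recover the ozone amount.
retrieved = -np.log(measured["305nm"] / incident["305nm"]) / cross_section["305nm"]
print(f"retrieved ozone column: {retrieved:.2e} molecules/cm^2")

Comparing channels with different absorption strengths, as the real instrument does, is what turns this single number into a profile by altitude.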
In 1978, Goddard launched the last and most capable of its Nimbus-series satellites. Nimbus 7 carried an improved version of this experiment, called the Solar Backscatter Ultraviolet (SBUV) instrument. It also carried a new sensor called the Total Ozone Mapping Spectrometer (TOMS). As opposed to the SBUV, which provided a vertical profile of ozone in the atmosphere, the TOMS instrument generated a high-density map of the total amount of ozone in the atmosphere.
A similar instrument, called the SBUV-2, has been put on weather satellites since the early 1980s. For a number of years, the Space Shuttle periodically flew a Goddard instrument called the Shuttle Solar Backscatter Ultraviolet (SSBUV) experiment that was used to calibrate the SBUV-2 satellite instruments to ensure the readings continued to be accurate. In the last couple of years, however, scientists have developed data-processing methods of calibrating the instruments, eliminating the need for the Shuttle experiments.
Yet it was actually not a NASA satellite that discovered the "hole" that finally developed in the ozone layer. In May 1985, a British researcher in Antarctica published a paper announcing that he had detected an astounding 40% loss in the ozone layer over Antarctica the previous winter. When Goddard researchers went back and looked at their TOMS data from that time period, they discovered that the data indicated the exact same phenomenon. Indeed, the satellite indicated an area of ozone layer thinning, or "hole,"26 the size of the continental U.S.
How had researchers missed a development that drastic? Ironically enough, it was because the anomaly was so drastic. The TOMS data analysis software had been programmed to flag grossly anomalous data points, which were assumed to be errors. Nobody had expected the ozone loss to be as great as it was, so the data points over the area where the loss had occurred looked like problems with the instrument or its calibration.
Once the Nimbus 7 data was verified, Goddard's researchers generated a visual map of the area over Antarctica where the ozone loss had occurred. In fact, the ability to generate visual images of the ozone layer and its "holes" has been among the significant contributions NASA's ozone-related satellites have made to the public debate over the issue. Data points are hard for most people to fully understand. But for non-scientists, a visual image showing a gap in a protective layer over Antarctica or North America makes the problem not only clear, but somehow very real.
The problem then became determining what was causing the loss of ozone. The question was a particularly sticky one, because it was going to relate directly to legislation and restrictions that would be extremely costly for industry. By 1978, the Environmental Protection Agency (EPA) had already moved to ban the use of CFCs in aerosols. By 1985, the United Nations Environmental Program (UNEP) was calling on nations to take measures to protect the ozone layer and, in 1987, forty-three nations signed the "Montreal Protocol," agreeing to cut CFC production 50% by the year 2000.
The CFC theory was based on a prediction that chlorofluorocarbons, when they reached the upper atmosphere, released chlorine and fluorine. The chlorine, it was suspected, was reacting with the ozone to form chlorine monoxide - a chemical that is able to destroy a large amount of ozone in a very short period of time. Because the issue was the subject of so much debate, NASA launched numerous research efforts to try to validate or disprove the theory. In addition to satellite observations, NASA sent teams of researchers and aircraft to Antarctica to take in situ readings of the ozone layer and the ozone "hole" itself. These findings were then supplemented with the bigger picture perspective the TOMS and SBUV instruments could provide.
The TOMS instrument on Nimbus 7 was not supposed to last more than a couple of years. But the information it was providing was considered so critical to the debate that Goddard researchers undertook an enormous effort to keep the instrument working, even as it aged and began to degrade. The TOMS instrument also hadn't been designed to show long-term trends, so the data processing techniques had to be significantly improved to give researchers that kind of information. In the end, Goddard was able to keep the Nimbus 7 TOMS instrument operating for almost 15 years, which provided ozone monitoring until Goddard was able to launch a replacement TOMS instrument on a Russian satellite in 1991.27
A more comprehensive project to study the upper atmosphere and the ozone layer was launched in 1991, as well. The satellite, called the Upper Atmosphere Research Satellite (UARS), was one of the results of Congress's 1975 mandate for NASA to pursue additional ozone research. Although its goal is to try to understand the chemistry and dynamics of the upper atmosphere, the focus of UARS is clearly on ozone research. Original plans called for the spacecraft to be launched from the Shuttle in the mid-1980s, but the backlog caused by the Challenger explosion delayed its launch until 1991.
Once in orbit, however, the more advanced instruments on board the UARS satellite were able to map chlorine monoxide levels in the stratosphere. Within months, the satellite was able to confirm what the Antarctic aircraft expeditions and Nimbus-7 satellite had already reported - that there was a clear and causal link between levels of chlorine, formation of chlorine monoxide, and levels of ozone loss in the upper atmosphere.
Since the launch of UARS, the TOMS instrument has been put on several additional satellites to ensure that we have a continuing ability to monitor changes in the ozone layer. A Russian satellite called Meteor 3 took measurements with a TOMS instrument from 1991 until the satellite ceased operating in 1994. The TOMS instrument was also incorporated into a Japanese satellite called the Advanced Earth Observing System (ADEOS) that was launched in 1996. ADEOS, which researchers hoped could provide TOMS coverage until the next scheduled TOMS instrument launch in 1999, failed after less than a year in orbit. But fortunately, Goddard had another TOMS instrument ready for launch on a small NASA satellite called an Earth Probe, which was put into orbit with the Pegasus launch vehicle in 1996. Researchers hope that this instrument will continue to provide coverage and data until the next scheduled TOMS instrument launch.
All of these satellites have given us a much clearer picture of what the ozone layer is, how it interacts with various other chemicals, and what causes it to deteriorate. These pieces of information are essential elements for us to have if we want to figure out how best to protect what is arguably one of our most precious natural resources.
Using the UARS satellite, scientists have been able to track the progress of CFCs up into the stratosphere and have detected the build-up of chlorine monoxide over North America and the Arctic as well as Antarctica. Scientists also have discovered that ozone loss is much greater when the temperature of the stratosphere is cold. In 1997, for example, particularly cold stratospheric temperatures created the first Antarctic-type ozone hole over North America.
Another factor in ozone loss is the level of aerosols, or particulate matter, in the upper atmosphere. The vast majority of aerosols come from soot, other pollution, or volcanic activity, and Goddard's scientists have been studying the effects of these particles in the atmosphere ever since the launch of the Nimbus I spacecraft in 1964. Goddard's 1984 Earth Radiation Budget Satellite (ERBS), which is still operational, carries a Stratospheric Aerosol and Gas Experiment (SAGE II) that tracks aerosol levels in the lower and upper atmosphere. The Halogen Occultation Experiment (HALOE) instrument on UARS also measures aerosol intensity and distribution.
In 1991, both UARS and SAGE II were used to track the movement and dispersal of the massive aerosol cloud created by the Mt. Pinatubo volcano eruption in the Philippines. The eruption caused stratospheric aerosol levels to increase to as much as 100 times their pre-eruption levels, creating spectacular sunsets around the world but causing some other effects, as well. These volcanic clouds appear to help cool the Earth, which could affect global warming trends, but the aerosols in these clouds seem to increase the amount of ozone loss in the stratosphere, as well.
The good news is that the atmosphere seems to be beginning to heal itself. In 1979 there was no ozone hole. Throughout the 1980s, while legislative and policy debates raged over the issue, the hole developed and grew steadily larger. In 1989, most U.S. companies finally ceased production of CFC chemicals and, in 1990, the U.N. strengthened its Montreal Protocol to call for the complete phaseout of CFCs by the year 2000. Nature is slow to react to changes in our behavior but, by 1997, scientists finally began to see a levelling out and even a slight decrease in chlorine monoxide levels and ozone loss in the upper atmosphere.28
Continued public interest in this topic has made ozone research a little more complicated for the scientists involved. Priorities and pressures in the program have changed along with Presidential administrations and Congressional agendas and, as much as scientists can argue that data is simply data, they cannot hope to please everyone in such a politically charged arena. Some environmentalists argue that the problem is much worse than NASA is making it out to be, while more conservative politicians have argued that NASA's scientists are blowing the issue out of proportion.29
But at this point a few things are clearer. The production of CFC chemicals was, in fact, harming a critical component of our planet's atmosphere. It took a variety of ground and space instruments to detect and map the nature and extent of the problem. But the perspective offered by Goddard's satellites allowed scientists and the general public to get a clear overview of the problem and map the progression of events that caused it. This information has had a direct impact on changing the world's industrial practices which, in turn, have begun to slow the damage and allow the planet to heal itself. The practical implications of Earth-oriented satellite data may make life a little more complicated for the scientists involved, but no one can dispute the significance or impact of the work. By developing the technology to view and analyze the Earth from space, we have given ourselves an invaluable tool for helping us understand and protect the planet on which we live.
One of the biggest advantages of remote sensing of the Earth from satellites stems from the fact that the majority of the Earth's surface area is extremely difficult to study from the ground. The world's oceans cover 71% of the Earth's surface and comprise 99% of its living area. Atmospheric convective activity over the tropical ocean area is believed to drive a significant amount of the world's weather. Yet until recently, the only way to map or analyze this powerful planetary element was with buoys, ships or aircraft. But these methods could only obtain data from various individual points, and the process was extremely difficult, expensive, and time-consuming.
Satellites, therefore, offered oceanographers a tremendous advantage. A two-minute ocean color satellite image, for example, contains more measurements than a ship travelling 10 knots could make in a decade. This ability has allowed scientists to learn a lot more about the vast open stretches of ocean that influence our weather, our global climate, and our everyday lives.30
Although Goddard's early meteorological satellites were not geared specifically toward analyzing ocean characteristics, some of the instruments could provide information about the ocean as well as the atmosphere. The passive microwave sensors that allowed scientists to "see" through clouds better, for example, also let them map the distribution of sea ice around the world. Changes in sea ice distribution can indicate climate changes and affect sea levels around the world, which makes this an important parameter to monitor. At the same time, this information also has allowed scientists to locate open passageways for ships trying to get through the moving ice floes of the Arctic region.
By 1970, NOAA weather satellites also had instruments that could measure the temperature of the ocean surface in areas where there was no cloud cover, and the Landsat satellites could provide some information on snow and ice distributions. But since the late 1970s, much more sophisticated ocean-sensing satellite technology has emerged.31
The Nimbus 7 satellite, for example, carried an improved microwave instrument that could generate a much more detailed picture of sea ice distribution than either the earlier Nimbus or Landsat satellites. Nimbus 7 also carried the first Coastal Zone Color Scanner (CZCS), which allowed scientists to map pollutants and sediment near coastlines. The CZCS also showed the location of ocean phytoplankton around the world. Phytoplankton are tiny, carbon dioxide-absorbing plants that constitute the lowest rung on the ocean food chain. So phytoplankton generally mark spots where larger fish may be found. But because they bloom where nutrient-rich water from the deep ocean comes up near the surface, their presence also gives scientists clues about the ocean's currents and circulation.
Nimbus 7 continued to send back ocean color information until 1984. Scientists at Goddard continued working on ocean color sensor development throughout the 1980s, and a more advanced coastal zone ocean color instrument was launched on the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) satellite in 1997. In contrast to most scientific satellites, SeaWiFS was funded and launched by a private company instead of by NASA. Most of the ocean color data the satellite provides is purchased by NASA and other research institutions, but the company is selling some data to the fishing industry, as well.32
Since the launch of the Nimbus 7 and Tiros-N satellites in 1978, scientists have also been able to get much better information on global ocean surface temperatures. Sea surface temperatures tell scientists about ocean circulation, because they can use the temperature information to track the movement of warmer and cooler bodies of water. Changes in sea surface temperatures can also indicate the development of phenomena such as El Nino climate patterns. In fact, one of the most marked indications of a developing El Nino condition, which can cause heavy rains in some parts of the world and devastating drought in others, is an unusually warm tongue of water moving eastward from the western equatorial Pacific Ocean.
NOAA weather satellites have carried instruments to measure sea surface temperature since 1981, and NASA's EOS AM-1 satellite, scheduled for launch in 1999, incorporates an instrument that can measure those temperatures with even more precision. The launch of Nimbus 7 also gave researchers the ability to look at surface winds, which help drive ocean circulation. With Nimbus 7, however, scientists had to infer surface winds by looking at slight differences in microwave emissions coming from the ocean surface. A scatterometer designed specifically to measure surface winds was not launched until the Europeans launched ERS-1 in 1991. Another scatterometer was launched on the Japanese ADEOS spacecraft in 1996. Because ADEOS failed less than a year after launch, Goddard researchers have begun an intensive effort to launch another scatterometer, called QuickSCAT, on a NASA spacecraft. JPL project managers are being aided in this effort by the Goddard-developed Rapid Spacecraft Procurement Initiative, which will allow them to incorporate the instrument into an existing small spacecraft design. Using this streamlined process, scientists hope to have QuickSCAT in orbit by the end of 1998.33
In the 1970s, researchers at the Wallops Flight Facility also began experimenting with radar altimetry to determine sea surface height, although they were pleased if they could get accuracy within a meter. In 1992, however, a joint satellite project between NASA and the French Centre National d'Etudes Spatiales (CNES) called TOPEX/Poseidon put a much more accurate radar altimeter into orbit. Goddard managed the development of the TOPEX radar altimeter, which can measure sea surface height within a few centimeters. In addition to offering useful information for maritime weather reports, this sea level data tells scientists some important things about ocean movement.
For one thing, sea surface height indicates the build-up of water in one area of the world or another. One of the very first precursors to an El Nino condition, for example, is a rise in ocean levels in the western equatorial Pacific, caused by stronger-than-normal easterly trade winds. Sea level also tells scientists important information about the amount of heat the ocean is storing. If the sea level in a particular area is low, it means that the area of warm, upper-level water is shallow. This means that colder, deeper water can reach the surface there, driving ocean circulation and bringing nutrients up from below, leading to the production of phytoplankton. The upwelling of cold water will also cool down the sea surface temperature, reducing the amount of water that evaporates into the atmosphere.
All of these improvements in satellite capabilities gave oceanographers and scientists an opportunity to integrate on-site surface measurements from buoys or ships with the more global perspective available from space. As a result, we are finally beginning to piece together a more complete picture of our oceans and the role they play in the Earth's biosystems and climate. In fact, one of the most significant results of ocean-oriented satellite research was the realization that ocean and atmospheric processes were intimately linked to each other. To really understand the dynamics of the ocean or the atmosphere, we needed to look at the combined global system they comprised.34
El Nino and Global Change
The main catalyst that prompted scientists to start looking at the oceans and atmosphere as an integrated system was the El Nino event of 1982-83. The rains and drought associated with the unusual weather pattern caused eight billion dollars of damage, leading to several international research programs to try to understand and predict the phenomenon better. The research efforts included measurements by ships, aircraft, ocean buoys, and satellites, and the work is continuing today. But by 1996, scientists had begun to understand the warning signals and patterns of a strong El Nino event. They also had the technology to track atmospheric wind currents and cloud formation, ocean color, sea surface temperatures, sea surface levels and sea surface winds, which let them accurately predict the heavy rains and severe droughts that occurred at points around the world throughout the 1997-98 winter.
The reason the 1982-83 El Nino prompted a change to a more integrated ocean-atmospheric approach is that the El Nino phenomenon does not exist in the ocean or the atmosphere by itself. It's the coupled interactions between the two elements that cause this periodic weather pattern to occur. The term El Nino, which means "The Child," was coined by fishermen on the Pacific coast of Central America who noticed a warming of their coastal ocean waters, along with a decline in fish population, near the Christ Child's birthday in December. But as scientists have discovered, the sequence of events that causes that warming begins many months earlier, in winds headed the opposite direction.
In a normal year, strong easterly trade winds blowing near the equator drag warmer, upper-level ocean water to the western edge of the Pacific ocean. That build-up of warm water causes convection up into the tropical atmosphere, leading to rainfall along the Indonesian and Australian coastlines. It also leads to upwelling of colder, nutrient-rich water along the eastern equatorial Pacific coastlines, along Central and South America. In an El Nino year, however, a period of stronger-than-normal trade winds that significantly raises sea levels in the western Pacific is followed by a sharp drop in those winds. The unusually weak trade winds allow the large build-up of warm water in the western tropical Pacific to flow eastward along the equator. That change moves the convection and rainfall off the Indonesian and Australian coasts, causing severe drought in those areas and, as the warm water reaches the eastern edge of the Pacific ocean, much heavier than normal rainfall occurs along the western coastlines of North, Central, and South America. The movement of warm water toward the eastern Pacific also keeps the colder ocean water from coming up to the surface, keeping phytoplankton from growing and reducing the presence of fish further up on the food chain.
In other words, an El Nino is the result of a change in atmospheric winds, which causes a change in ocean currents and sea level distribution, which causes a change in sea surface temperature, which causes a change in water vapor entering the atmosphere, which causes further changes in the wind currents, and so on, creating a cyclical pattern. Scientists still don't know exactly what causes the initial change in atmospheric winds, but they now realize that they need to look at a global system of water, land and air interactions in order to find the answer. And satellites play a critical role in being able to do that.
An El Nino weather pattern is the biggest short-term "coupled" atmospheric and oceanographic climate signal on the planet after the change in seasons, which is why it prompted researchers to take a more interdisciplinary approach to studying it. But scientists are beginning to realize that many of the Earth's climatic changes or phenomena are really coupled events that require a broader approach in order to understand. In fact, the 1990s have seen the emergence of a new type of scientist who is neither an oceanographer nor an atmospheric specialist, but is an amphibious kind of researcher focusing on the broader issue of climate change.35
One of the other important topics these researchers are currently trying to assess is the issue of global warming. Back in 1896, a Swedish chemist named Svante Arrhenius predicted that the increasing carbon dioxide emissions from the industrial revolution would eventually cause the Earth to become several degrees warmer. This warming is due to what has become known as the "greenhouse effect." In essence, carbon dioxide and other "greenhouse gases," such as water vapor, allow the short-wavelength radiation from the Sun to pass through the atmosphere, warming the Earth. But the gases absorb the longer-wavelength energy travelling back from the Earth into space, radiating part of that energy back down to the Earth again. Just as the glass in a greenhouse allows the Sun through but traps the heat inside, these gases end up trapping a certain amount of heat in the Earth's atmosphere, causing the Earth to become warmer.
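The scale of the effect can be seen in a standard textbook energy-balance estimate. The Python sketch below is a zero-dimensional toy model, not a NASA or GISS climate model; the solar constant, albedo, and the idealized single-layer greenhouse assumption are the usual classroom values rather than figures drawn from this history.

sigma = 5.67e-8    # Stefan-Boltzmann constant, W/m^2/K^4
S = 1361.0         # approximate solar constant at Earth, W/m^2
albedo = 0.30      # fraction of sunlight reflected straight back to space

# Balance absorbed sunlight against emitted infrared: (1 - albedo) * S / 4 = sigma * T^4
T_no_atmosphere = ((1 - albedo) * S / (4 * sigma)) ** 0.25
print(f"airless Earth: about {T_no_atmosphere:.0f} K")            # ~255 K, well below freezing

# Idealized one-layer greenhouse: an atmosphere that absorbs the outgoing infrared
# and re-radiates half of it downward warms the surface by a factor of 2 ** 0.25.
T_greenhouse = T_no_atmosphere * 2 ** 0.25
print(f"with an idealized greenhouse layer: about {T_greenhouse:.0f} K")  # ~303 K

The real atmosphere is only partially absorbing, so the observed average surface temperature of roughly 288 K falls between these two extremes, but the sketch shows why even a modest change in how much infrared the atmosphere traps matters.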
The effect of this warming could be small or great, depending on how much the temperature actually changes. If it is only a degree or two, the effect would be relatively small. But a larger change in climate could melt polar ice, causing the sea level to rise several feet and wiping out numerous coastal communities and resources. If the warming happened rapidly, vegetation might not have time to adjust to the climate change, which could affect the world's food supply as well as timber and other natural resources.
The critical question, then, is how great a danger global warming is. And the answer to that is dependent on numerous factors. One, obviously, is the amount of carbon dioxide and other emissions we put into the air - a concern that has driven efforts to reduce our carbon dioxide-producing fossil fuel consumption. But the amount of carbon dioxide in the air is also dependent on how much can be absorbed again by plant life on Earth - a figure that scientists depend on satellites in order to compute. Landsat images can tell scientists how much deforestation is occurring around the world, and how much healthy plant life remains to absorb CO2. Until recently, however, the amount of CO2 absorbed by the world's oceans was unknown. The ocean color images of SeaWiFS are helping to fill that gap, because the phytoplankton it tracks are a major source of carbon dioxide absorption in the oceans.
Another part of the global warming equation is how much water vapor is in the atmosphere - a factor that is driven by ocean processes, especially in the heat furnace of the tropics. As a result, scientists are trying to learn more about the transfer of heat and water vapor between the ocean and different levels of the atmosphere, using tools such as Goddard's TRMM and UARS satellites.
All of these numbers and factors are fed into atmospheric and global computer models, many of which have been developed at the Goddard Institute for Space Studies (GISS) in New York City. These models then try to predict how our global climate may change based on current emissions, population trends, and known facts about ocean and atmospheric processes.
While these models have been successful in predicting short-term effects, such as the global temperature drop after the Mt. Pinatubo volcano eruption, the problem with trying to predict global change is that it's a very long-term process, with many factors that may change over time. We have only been studying the Earth in bits and pieces, and for only a short number of years. In order to really understand which climate changes are short-term variations and which ones are longer trends of more permanent change, scientists needed to observe and measure the global, integrated climate systems of Planet Earth over a long period of time. This realization was the impetus for NASA's Mission to Planet Earth, or the Earth Science Enterprise.36
Earth Science Enterprise
In some senses, the origins of what became NASA's "Mission to Planet Earth" (MTPE) began in the late 1970s, when we began studying the overall climate and planetary processes of other planets in our solar system. Scientists began to realize that we had never taken that kind of "big picture" look at our own planet, and that such an effort might yield some important and fascinating results. But an even larger spur to the effort was simply the development of knowledge and technology that gave scientists both the capability and an understanding of the importance of looking at the Earth from a more global, systems perspective.
Discussions along these lines were already underway when the El Nino event of 1982-83 and the discovery of the ozone "hole" in 1985 elevated the level of interest and support for global climate change research to an almost crisis level. Although the "Mission to Planet Earth" was not announced as a formal new NASA program until 1990, work on the satellites to perform the mission was underway before that. In 1991, Goddard's UARS satellite became the first official MTPE spacecraft to be launched.
Although the program has now changed its name to the Earth Science Enterprise, suffered several budget cuts, and refocused its efforts from overall global change to a narrower focus of global climate change (leaving out changes in solid land masses), the basic goal of the program echoes what was initiated in 1990. In essence, the Earth Science Enterprise aims to integrate satellite, aircraft and ground-based instruments to monitor 24 interrelated processes and parameters in the planet's oceans and atmosphere over a 15-year period.
Phase I of the program consisted of integrating information from satellites such as UARS, the TOMS Earth Probe, TRMM, TOPEX/Poseidon, ADEOS and SeaWiFS with Space Shuttle research payloads, research aircraft and ground station observations. Phase II is scheduled to begin in 1999 with the launch of Landsat 7 and the first in a series of Earth Observing System (EOS) satellites. The EOS spacecraft are extremely large research platforms with many different instruments to look at various atmospheric and ocean processes that affect natural resources and the overall global climate. They will be polar-orbiting satellites, with orbital paths that will allow the different satellites to take measurements at different times of the day. EOS AM-1 is scheduled for launch in late 1998. EOS PM-1 is scheduled for launch around the year 2000. The first in an EOS altimetry series of satellites, which will study the role of oceans, ocean winds and ocean-atmosphere interactions in climate systems, will launch in 2000. An EOS CHEM-1 satellite, which will look at the behavior of ozone and greenhouse gases, measure pollution and the effect of aerosols on global climate, is scheduled for launch in 2002. Follow-on missions will continue the work of these initial observation satellites over a 15-year period.
There is still much we don't know about our own planet. Indeed, the first priority of the Earth Science Enterprise satellites is simply to try to fill in the gaps in what we know about the behavior and dynamics of our oceans and our atmosphere. Then scientists can begin to look at how those elements interact, and what impact they have and will have on global climate and climate change. Only then will we really know how great a danger global warming is, or how well our planet can absorb the man-made elements we are creating in greater and greater amounts.37
It's an ambitious task. But until the advent of satellite technology, the job would have been impossible to even imagine undertaking. Satellites have given us the ability to map and study large sections of the planet that would be difficult to cover from the planet's surface. Surface and aircraft measurements also play a critical role in these studies. But satellites were the breakthrough that gave us the unique ability to stand back far enough from the trees to see the complete and complex forest in which we live.
For centuries, humankind has stared at the stars and dreamed of travelling among them. We imagined ourselves zipping through asteroid fields, transfixed by spectacular sights of meteors, stars, and distant galaxies. Yet when the astronauts first left the planet, they were surprised to find themselves transfixed not by distant stars, but by the awe-inspiring view their spaceship gave them of the place they had just left - a dazzling, mysterious planet they affectionately nicknamed the "Big Blue Marble." As our horizons expanded into the universe, so did our perspective and understanding of the place we call home. As an astronaut on an international Space Shuttle crew put it, "The first day or so we all pointed to our countries. The third or fourth day we were pointing to our continents. By the fifth day we were aware of only one Earth."38
Satellites have given this perspective to all of us, expanding our horizons and deepening our understanding of the planet we inhabit. If the world is suddenly a smaller place, with cellular phones, paging systems, and Internet service connecting friends from distant lands, it's because satellites have advanced our communication abilities far beyond anything Alexander Graham Bell ever imagined. If we have more than a few hours' notice of hurricanes or storm fronts, it's because weather satellites have enabled meteorologists to better understand the dynamics of weather systems and track those systems as they develop around the world. If we can detect and correct damage to our ozone layer or give advance warning of a strong El Niño winter, it's because satellites have helped scientists better understand the changing dynamics of our atmosphere and our oceans.
We now understand that our individual "homes" are affected by events on the far side of the globe. From both a climatic and environmental perspective, we have realized that our home is indeed "one Earth," and we need to look at it in its entirety in order to understand and protect it. The practical implications of this information sometimes make the scientific pursuit of this understanding more complicated than our explorations into the deeper universe. But no one would dispute the inherent worth of the information or the advantages satellites offer.
The satellites developed by Goddard and its many partners have expanded both our capabilities and our understanding of the complex processes within our Earth's atmosphere. Those efforts may be slightly less mind-bending than our search for space-time anomalies or unexplainable black holes, but they are perhaps even more important. After all, there may be millions of galaxies in the universe. But until we find a way to reach them, this planet is the only one we have. And the better we understand it, the better our chances are of preserving it - not only for ourselves, but for the generations to come.
By Jodi Beggs, About.com Guide
- Introduction to Economics
- The Supply and Demand Model
- Market Equilibrium
- Changes in Equilibrium
- Utility Maximization
- Production and Profit Maximization
- Types of Markets
- Government Regulation
- Externalities and Public Goods
Introduction to Economics
This category provides a very basic introduction to the field of economics and prepares students and readers to move on to the other sections of the site.
- What Is Economics?
- Microeconomics vs. Macroeconomics
- Economics as the "Dismal Science?"
- Positive Versus Normative Analysis in Economics
The Supply and Demand Model
This category goes through the basics of the supply and demand model that is widely used in economics.
This category introduces the concept of demand and the demand curve. It also explains what shifts the demand curve and goes through some demand curve algebra.
This category introduces the concept of supply and the supply curve. It also explains what shifts the supply curve and goes through some supply curve algebra.
This category introduces the concept of market equilibrium and explains how equilibrium is calculated.
- Supply and Demand Equilibrium
- Calculating Economic Equilibrium
- Supply & Demand Practice Question
- 10 Supply and Demand Practice Questions
- The Effects of a Black Market Using Supply and Demand
Changes in Equilibrium
This category introduces the concept of changes in equilibrium, or comparative statics, and explains how to calculate those changes both qualitatively and quantitatively.
This category introduces the concept of elasticity and shows how it is calculated. It also outlines the different types of elasticity and illustrates an important application of the elasticity concept.
- A Beginner's Guide to Elasticity
- Price Elasticity of Demand
- Price Elasticity of Supply
- Income Elasticity of Demand
- Cross-Price Elasticity of Demand
- Arc Elasticity
- Using Calculus To Calculate Price Elasticity of Demand
- Using Calculus To Calculate Price Elasticity of Supply
- Using Calculus To Calculate Income Elasticity of Demand
- Using Calculus To Calculate Cross-Price Elasticity of Demand
- Elasticity Practice Question
- What's the Price Elasticity of Demand for Gasoline?
- Adventures in Downward Sloping Demand Curves and Elasticity
This category introduces the utility maximization framework and shows how consumers' demand is derived.
Production and Profit Maximization
This category introduces various measures of revenue, cost and profit and shows how firms make production decisions to maximize profit.
- The Production Possibilities Frontier
- What are Opportunity Costs?
- Opportunity Costs and Tradeoffs
- Baseball Players and Opportunity Costs
- A Season Later: Baseball Players and Opportunity Costs
- Introduction to Revenue
- Understanding Costs - How to Understand and Calculate Cost Measures
- Marginal Revenue and Marginal Cost Practice Question
Types of Markets
This category gives an overview of different types of market structures and describes how prices and quantities are set in each.
- Introduction to Competitive Markets
- What You Need to Know About Monopolies
- Federal Efforts to Control Monopoly
- Monopolies, Mergers, and Restructuring
- Introduction to Monopolistic Competition
This category analyzes the impact of various types of government intervention on the amount of value created in a market.
- The Effect of Income Taxes on Economic Growth
- How Good Intentions Lead to Crushing Marginal Tax Rates on the Working Poor
- FairTax - Income Taxes vs. Sales Taxes
- Should Income Tax Rates Depend on Lifetime Earnings?
- Payroll Tax Reduction - One Approach to a Carbon Tax
- Do Richer People Pay a Higher Proportion of Tax Under a Flat Tax?
- Gas Tax and Carbon Tax FAQ
- Oregon's Mileage Tax: A Truly Bad Idea
- How Do High Small Business Corporate Tax Rates Hurt The Economy?
- Should Governments Legalize and Tax Marijuana?
Externalities and Public Goods
This category introduces the concept of externalities, or market side effects, and discusses the effect of externalities on the value created by a market. It also introduces various types of goods that result in market failures.
Resources for Teachers
More on Money
- Money, Money, Money
- More about the history of money from the Federal Reserve Bank of San Francisco’s “American Currency” exhibit.
- Our Money
- Teaching unit on U.S. currency developed by the Federal Reserve Bank of Minneapolis.
- The Story of Money
- Exhibits from the Federal Reserve Bank of Atlanta’s Monetary Museum. The story of money told through artifacts, coins, and currency.
- Money Facts
- Tidbits and trivia about U.S. currency from the U.S. Bureau of Engraving and Printing.
Education, Curricula, and Classroom Activities
- Money Math: Lessons for Life
- How can you connect math to the real world? Money Math: Lessons for Life, now in its second printing, is a four-lesson curriculum supplement for middle school math classes, teaching grade 7 to 9 math concepts using real-life examples from personal finance. The 86-page book is a teacher’s guide with lesson plans, reproducible activity pages, and teaching tips.
- Links to free print and audiovisual materials to increase understanding of the Federal Reserve, economics, and financial education.
- Board of Governors Kids Page
- Site designed to educate middle-school students about the Federal Reserve’s Board of Governors. Information presented in a question-and-answer format, including a quiz. (Federal Reserve Board of Governors).
- Ohio Council on Economic Education
- This organization strives to help teachers learn about market economics. Site includes information on teachers’ professional development and the Economics Challenge, a competition for high school students.
- AskERIC Lesson Plans
- Economics-based lesson plans and activities for students in grades 4-12, developed by the Educational Resources Information Center.
This year, Germany finally paid off its old bonds for World War 1 reparations, as Margaret MacMillan has noted in the New York Times. MacMillan asserts that “John Maynard Keynes, a member of the British delegation in Paris, rightly argued that the Allies should have forgotten about reparations altogether.” Actually, the truth is more complicated. A fuller understanding of Keynes’s role in the 1919 Paris peace conference after World War 1 may also offer a useful perspective on his contributions to economics.
Keynes became the most famous economist of his time, not for his 1936 General Theory, but for his Economic Consequences of the Peace (1920) and A Revision of the Treaty (1922). These were brilliant polemics against the 1919 peace conference, exposing the folly of imposing on Germany a reparation debt worth more than 3 times its prewar annual GDP, which was to be repaid over a period of decades.
Germans saw the reparations as unjust extortion, and efforts to accommodate the Allies’ demands undermined the government’s legitimacy, leading to the rise of Nazism and the coming of a second world war. Keynes seemed to foresee the whole disaster. In his 1922 book, he posed the crucial question: “Who believes that the Allies will, over a period of one or two generations, exert adequate force over the German government to extract continuing fruits on a vast scale from forced labor?”
But what Keynes actually recommended in 1922 was that Germany should be asked to pay in reparations about 3% of its prewar GDP annually for 30 years. The 1929 Young Plan offered Germany similar terms and withdrew Allied occupation forces from the German Rhineland, but the Nazis’ rise to national power began after that.
In his 1938 memoirs, Lloyd George tells us that, during World War 1, Germany also had plans to seize valuable assets and property if it won the war, “but they had not hit on the idea of levying a tribute for 30 to 40 years on the profits and earnings of the Allied peoples. Mr. Keynes is the sole patentee and promoter of that method of extraction.”
How did Keynes get it so wrong on reparations? In 1871, after the Franco-Prussian War, Germany demanded payments from France on a far smaller scale (only a fraction of France’s annual GDP), while occupying northern France. To hasten the withdrawal of German troops, France made the payments well ahead of the required 3-year schedule, mainly by selling bonds to its own citizens. But the large capital inflow destabilized Germany’s financial system, which then led to a recession in Germany. Before 1914, some argued that such adverse consequences of indemnity payments for a victor’s economy would eliminate incentives for war and assure world peace. In response to such naive arguments, Keynes suggested in 1916 that postwar reparation payments could be extended over decades to avoid macroeconomic shock from large short-term capital flows and imports from Germany.
Nobody had ever tried to extract payments over decades from a defeated nation without occupying it, but that is what the Allies attempted after World War 1, following Keynes’s suggestion. Keynes argued about the payments’ size but not their duration.
Today economists regularly analyze the limits on a sovereign nation’s incentive to pay external debts. In our modern analytical framework, we can argue that the scenario of long-term reparation payments was not a sequential equilibrium. But such analysis uses game-theoretic models that were unknown to Keynes. As a brilliant observer, he certainly recognized the political problems of motivating long-term reparation payments over 30 years or more, but these incentive problems did not fit into the analytical framework that guided him in formulating his policy recommendations. So while condemning the Allies’ demands for Germany to make long-term reparation payments of over 7% of its GDP, Keynes considered long-term payments of 3% of GDP to be economically feasible for Germany, regardless of how politically poisonous such payments might be for its government. Considerations of macroeconomic stability could crowd out strategic incentive analysis for Keynes, given the limits of economic analysis in his time.
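To put these shares of GDP in perspective, a rough back-of-the-envelope calculation helps. The sketch below is not from the original post; it simply totals a constant payment stream expressed as a fraction of prewar GDP, with an assumed 5% discount rate, to contrast Keynes's 3%-for-30-years suggestion with the heavier demands he condemned.

```python
# Illustrative sketch (not from the original post): total and present value of a
# constant reparation stream expressed as a share of prewar GDP. The 5% discount
# rate and the 30-year horizon for the Allied demand are assumptions.

def total_and_present_value(share_of_gdp, years, discount_rate=0.05):
    """Sum a constant annual payment, undiscounted and discounted to year 0."""
    undiscounted = share_of_gdp * years
    present_value = sum(share_of_gdp / (1 + discount_rate) ** t
                        for t in range(1, years + 1))
    return undiscounted, present_value

keynes_plan = total_and_present_value(0.03, 30)    # Keynes's 1922 recommendation
allied_demand = total_and_present_value(0.07, 30)  # the >7%-of-GDP demand he condemned

print("Keynes (3%% for 30 yrs): total %.2f x GDP, PV %.2f x GDP" % keynes_plan)
print("Allies (7%% for 30 yrs): total %.2f x GDP, PV %.2f x GDP" % allied_demand)
```

Under these assumptions, Keynes's schedule sums to roughly 0.9 times one year's prewar GDP undiscounted (about 0.46 times in present-value terms), far below the more-than-threefold-GDP face value of the 1919 reparation debt.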
Reviewing this history today, we should be impressed both by Keynes’s skill as a critical observer of great policy decisions and by the severe limits of his analytical framework for suggesting better policies. Advances in economic theory have greatly expanded the scope of economic analysis since Keynes’s day and have given us a better framework for policy analysis than Keynes ever had.
Deforestation is the logging or burning of trees in forested areas. There are several reasons for doing so: trees or derived charcoal can be sold as a commodity and used by humans, while cleared land is used as pasture, for commodity plantations, and for human settlement. The removal of trees without sufficient reforestation has resulted in habitat damage, biodiversity loss and aridity, and deforested regions often degrade into wasteland.
Disregard for or unawareness of intrinsic value, lack of ascribed value, lax forest management and weak environmental law allow deforestation to occur on such a large scale. In many countries, deforestation is an ongoing issue which is causing extinction, changes to climatic conditions, desertification and displacement of indigenous people.
In simple terms, deforestation occurs because forested land is not economically viable; clearing it increases the amount of farmland. Yet woods are relied upon by native populations of over 200 million people worldwide.
The presumed value of forests as genetic resources has never been confirmed by any economic studies. As a result, owners of forested land lose money by not clearing the forest, and this affects the welfare of the whole society. From the perspective of the developing world, the benefits of forests as carbon sinks or biodiversity reserves go primarily to richer developed nations, and there is insufficient compensation for these services. As a result, some countries simply have too much forest. Developing countries feel that some countries in the developed world, such as the United States of America, cut down their forests centuries ago and benefited greatly from this deforestation, and that it is hypocritical to deny developing countries the same opportunities: that the poor shouldn’t have to bear the cost of preservation when the rich created the problem.
Aside from a general agreement that deforestation occurs to increase the economic value of the land, there is no agreement on what causes deforestation. Logging may be a direct source of deforestation in some areas and have no effect or be at worst an indirect source in others, since logging roads enable easier access for farmers wanting to clear the forest: experts do not agree on whether logging is an important contributor to global deforestation, and some believe that logging makes a considerable contribution to reducing deforestation because in developing countries logging reserves are far larger than nature reserves. Similarly, there is no consensus on whether poverty is important in deforestation. Some argue that poor people are more likely to clear forest because they have no alternatives, others that the poor lack the ability to pay for the materials and labour needed to clear forest. The claim that population growth drives deforestation is weak and based on flawed data, with population increase due to high fertility rates being a primary driver of tropical deforestation in only 8% of cases. The FAO states that the global deforestation rate is unrelated to the human population growth rate; rather, it is the result of lack of technological advancement and inefficient governance. There are many causes at the root of deforestation, such as corruption and inequitable distribution of wealth and power, population growth and overpopulation, and urbanization. Globalization is often viewed as a driver of deforestation.
According to British environmentalist Norman Myers, 5% of deforestation is due to cattle ranching, 19% to over-heavy logging, 22% due to the growing sector of palm oil plantations, and 54% due to slash-and-burn farming.
It's very difficult, if not impossible, to obtain figures for the rate of deforestation. The FAO data are based largely on reporting from forestry departments of individual countries. The World Bank estimates that 80% of logging operations are illegal in Bolivia and 42% in Colombia, while in Peru, illegal logging equals 80% of all activities. For tropical countries, deforestation estimates are very uncertain: based on satellite imagery, the rate of deforestation in the tropics is 23% lower than the most commonly quoted rates, and for the tropics as a whole deforestation rates could be in error by as much as +/- 50%. Conversely, a new analysis of satellite images reveals that deforestation in the Amazon basin is twice as fast as scientists previously estimated.
The UNFAO has the best long-term datasets on deforestation available; based on these datasets, global forest cover has remained approximately stable since the middle of the twentieth century, and based on the longest dataset available, global forest cover has increased since 1954. The rate of deforestation is also declining, with less and less forest cleared each decade. Globally the rate of deforestation declined during the 1980s, with even more rapid declines in the 1990s and still more rapid declines from 2000 to 2005. Based on these trends, global anti-deforestation efforts are expected to outstrip deforestation within the next half-century, with global forest cover increasing by 10 percent—an area the size of India—by 2050. Rates of deforestation are highest in developing tropical nations, although globally the rate of tropical forest loss is also declining, with tropical deforestation rates of about 8.6 million hectares annually in the 1990s, compared to a loss of around 9.2 million hectares during the previous decade.
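The "area the size of India" comparison above can be sanity-checked with rough reference figures. In the sketch below, the global forest area of roughly 4 billion hectares and India's land area of roughly 3.3 million km² are assumed reference values for illustration, not figures taken from the text.

```python
# Rough scale check of the "10 percent ~ an area the size of India" claim.
# The global forest area (~4.0 billion ha) and India's land area (~3.3 million km2)
# are assumed reference values, not figures from the text.
global_forest_ha = 4.0e9
india_km2 = 3.3e6

ten_percent_km2 = 0.10 * global_forest_ha / 100  # 100 hectares per km2
print("10%% of global forest: %.1f million km2 (India: about %.1f million km2)"
      % (ten_percent_km2 / 1e6, india_km2 / 1e6))
```

The two areas come out in the same ballpark (about 4 million km² versus 3.3 million km²), so the comparison is plausible as an order-of-magnitude statement.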
The utility of the FAO figures has been disputed by some environmental groups. These questions are raised primarily because the figures do not distinguish between forest types. The fear is that highly diverse habitats, such as tropical rainforest, may be experiencing an increase in deforestation which is being masked by large decreases in less biodiverse dry, open forest types. Because of this omission it is possible that many of the negative impacts of deforestation, such as habitat loss, are increasing despite a decline in deforestation. Some environmentalists have predicted that unless significant measures, such as seeking out and protecting undisturbed old-growth forests, are taken on a worldwide basis to preserve them, by 2030 only ten percent will remain, with another ten percent in a degraded condition; 80 percent will have been lost, and with it hundreds of thousands of species irreversibly.
Despite the ongoing reduction in deforestation over the past 30 years, deforestation remains a serious global ecological problem and a major social and economic problem in many regions. 13 million hectares of forest are lost each year, 6 million hectares of which are forest that had been largely undisturbed by man. This results in a loss of habitat for wildlife as well as reducing or removing the ecosystem services provided by these forests.
The decline in the rate of deforestation also does not address the damage already caused by deforestation. Global deforestation increased sharply in the mid-1800s, and about half of the mature tropical forests, between 7.5 million and 8 million square kilometres (2.9 million to 3 million sq mi) of the original 15 million to 16 million square kilometres (5.8 million to 6.2 million sq mi) that covered the planet until 1947, have been cleared.
The rate of deforestation also varies widely by region, and despite a global decline, in some regions, particularly in developing tropical nations, the rate of deforestation is increasing. For example, Nigeria lost 81% of its old-growth forests in just 15 years (1990-2005). All of Africa is suffering deforestation at twice the world rate. The effects of deforestation are most pronounced in tropical rainforests. Brazil has lost 90-95% of its Mata Atlântica forest. In Central America, two-thirds of lowland tropical forests have been turned into pasture since 1950. Half of the Brazilian state of Rondonia's 243,000 km² has been affected by deforestation in recent years, and tropical countries, including Mexico, India, Philippines, Indonesia, Thailand, Myanmar, Malaysia, Bangladesh, China, Sri Lanka, Laos, Nigeria, Congo, Liberia, Guinea, Ghana and the Côte d'Ivoire, have lost large areas of their rainforest. Because the rates vary so much across regions, the global decline in deforestation rates does not necessarily indicate that the negative effects of deforestation are also declining.
Deforestation trends could follow the Kuznets curve; however, even if true, this is problematic in so-called hot-spots because of the risk of irreversible loss of non-economic forest values, for example valuable habitat or species.
Deforestation is a contributor to global warming, and is often cited as one of the major causes of the enhanced greenhouse effect. Tropical deforestation is responsible for approximately 20% of world greenhouse gas emissions. According to the Intergovernmental Panel on Climate Change, deforestation, mainly in tropical areas, accounts for up to one-third of total anthropogenic carbon dioxide emissions. Trees and other plants remove carbon (in the form of carbon dioxide) from the atmosphere during the process of photosynthesis and release it back into the atmosphere during normal respiration. Only when actively growing can a tree or forest remove carbon over an annual or longer timeframe. Both the decay and burning of wood release much of this stored carbon back to the atmosphere. In order for forests to take up carbon, the wood must be harvested and turned into long-lived products and trees must be re-planted. Deforestation may cause carbon stores held in soil to be released. Forests are stores of carbon and can be either sinks or sources depending upon environmental circumstances. Mature forests alternate between being net sinks and net sources of carbon dioxide (see carbon dioxide sink and carbon cycle).
Reducing emissions from deforestation and forest degradation (REDD) in developing countries has emerged as a new potential complement to ongoing climate policies. The idea consists of providing financial compensation for the reduction of greenhouse gas (GHG) emissions from deforestation and forest degradation.
The world's rainforests are widely believed by laymen to contribute a significant amount of the world's oxygen, although it is now accepted by scientists that rainforests contribute little net oxygen to the atmosphere and deforestation will have no effect whatsoever on atmospheric oxygen levels. However, the incineration and burning of forest plants in order to clear land releases tonnes of CO2, which contributes to global warming.
The water cycle is also affected by deforestation. Trees extract groundwater through their roots and release it into the atmosphere. When part of a forest is removed, the trees no longer evaporate away this water, resulting in a much drier climate. Deforestation reduces the content of water in the soil and groundwater as well as atmospheric moisture. Deforestation reduces soil cohesion, so that erosion, flooding and landslides ensue. Forests enhance the recharge of aquifers in some locales; in most locales, however, forests are a major source of aquifer depletion.
Shrinking forest cover lessens the landscape's capacity to intercept, retain and transpire precipitation. Instead of trapping precipitation, which then percolates to groundwater systems, deforested areas become sources of surface water runoff, which moves much faster than subsurface flows. That quicker transport of surface water can translate into flash flooding and more localized floods than would occur with the forest cover. Deforestation also contributes to decreased evapotranspiration, which lessens atmospheric moisture and in some cases affects precipitation levels downwind from the deforested area, as water is not recycled to downwind forests but is lost in runoff and returns directly to the oceans. According to one preliminary study, in deforested north and northwest China, the average annual precipitation decreased by one third between the 1950s and the 1980s.
Trees, and plants in general, affect the water cycle significantly.
As a result, the presence or absence of trees can change the quantity of water on the surface, in the soil or groundwater, or in the atmosphere. This in turn changes erosion rates and the availability of water for either ecosystem functions or human services.
The forest may have little impact on flooding in the case of large rainfall events, which overwhelm the storage capacity of forest soil if the soils are at or close to saturation.
Tropical rainforests produce about 30% of our planet's fresh water.
Undisturbed forest has very low rates of soil loss, approximately 2 metric tons per square kilometre (6 short tons per square mile). Deforestation generally increases rates of soil erosion, by increasing the amount of runoff and reducing the protection of the soil from tree litter. This can be an advantage in excessively leached tropical rain forest soils. Forestry operations themselves also increase erosion through the development of roads and the use of mechanized equipment.
China's Loess Plateau was cleared of forest millennia ago. Since then it has been eroding, creating dramatic incised valleys, and providing the sediment that gives the Yellow River its yellow color and that causes the flooding of the river in the lower reaches (hence the river's nickname 'China's sorrow').
Removal of trees does not always increase erosion rates. In certain regions of southwest US, shrubs and trees have been encroaching on grassland. The trees themselves enhance the loss of grass between tree canopies. The bare intercanopy areas become highly erodible. The US Forest Service, in Bandelier National Monument for example, is studying how to restore the former ecosystem, and reduce erosion, by removing the trees.
Tree roots bind soil together, and if the soil is sufficiently shallow they act to keep the soil in place by also binding with underlying bedrock. Tree removal on steep slopes with shallow soil thus increases the risk of landslides, which can threaten people living nearby. However, much deforestation affects only the trunks of trees, leaving the roots in place, which mitigates the landslide risk.
Deforestation results in declines in biodiversity. The removal or destruction of areas of forest cover has resulted in a degraded environment with reduced biodiversity. Forests support biodiversity, providing habitat for wildlife; moreover, forests foster medicinal conservation. With forest biotopes being an irreplaceable source of new drugs (such as taxol), deforestation can destroy genetic variations (such as crop resistance) irretrievably.
Since tropical rainforests are the most diverse ecosystems on earth, and about 80% of the world's known biodiversity can be found in them, removal or destruction of significant areas of forest cover has resulted in a degraded environment with reduced biodiversity.
Scientific understanding of the process of extinction is insufficient to make accurate predictions about the impact of deforestation on biodiversity. Most predictions of forestry-related biodiversity loss are based on species-area models, with an underlying assumption that as forests decline, species diversity will decline similarly. However, many such models have been proven to be wrong, and loss of habitat does not necessarily lead to large-scale loss of species. Species-area models are known to overpredict the number of species known to be threatened in areas where actual deforestation is ongoing, and greatly overpredict the number of threatened species that are widespread.
It has been estimated that we are losing 137 plant, animal and insect species every single day due to rainforest deforestation, which equates to 50,000 species a year. Others state that tropical rainforest deforestation is contributing to the ongoing Holocene mass extinction. The known extinction rates from deforestation are very low, approximately 1 species per year from mammals and birds, which extrapolates to approximately 23,000 species per year for all species. Predictions have been made that more than 40% of the animal and plant species in Southeast Asia could be wiped out in the 21st century, with such predictions called into question by 1995 data showing that within regions of Southeast Asia much of the original forest has been converted to monospecific plantations, but potentially endangered species are very low in number and tree flora remains widespread and stable.
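The per-day and per-year figures quoted at the start of this paragraph are at least internally consistent, as the trivial check below shows; it simply multiplies the daily estimate by 365.

```python
# Consistency check of the species-loss estimate quoted above.
species_per_day = 137
print("Annual equivalent: %d species" % (species_per_day * 365))  # ~50,005, i.e. about 50,000
```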
Damage to forests and other aspects of nature could halve living standards for the world's poor and reduce global GDP by about 7% by 2050, a major report concluded at the Convention on Biological Diversity (CBD) meeting in Bonn. Historically, utilization of forest products, including timber and fuel wood, has played a key role in human societies, comparable to the roles of water and cultivable land. Today, developed countries continue to utilize timber for building houses, and wood pulp for paper. In developing countries almost three billion people rely on wood for heating and cooking. The forest products industry is a large part of the economy in both developed and developing countries. Short-term economic gains made by conversion of forest to agriculture, or over-exploitation of wood products, typically lead to loss of long-term income and long-term biological productivity (hence reduction in nature's services). West Africa, Madagascar, Southeast Asia and many other regions have experienced lower revenue because of declining timber harvests. Illegal logging causes billions of dollars of losses to national economies annually.
The new procedures used to obtain wood are causing more harm to the economy and outweigh the money earned by people employed in logging. According to a study, "in most areas studied, the various ventures that prompted deforestation rarely generated more than US$5 for every ton of carbon they released and frequently returned far less than US $1." The price on the European market for an offset tied to a one-ton reduction in carbon is 23 euro (about $35).
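The per-ton figures quoted above invite a simple comparison between what deforestation-driving ventures earn per ton of carbon released and what the market pays for a one-ton offset. The sketch below is only illustrative arithmetic on the numbers given in the text; the $35 figure is the article's own dollar conversion of the 23-euro offset price.

```python
# Illustrative comparison of the per-ton-of-carbon figures quoted above.
venture_revenue_high = 5.0   # USD per ton of carbon, upper bound cited for most ventures
venture_revenue_low = 1.0    # USD per ton, "frequently returned far less than US $1"
offset_price = 35.0          # USD, the article's dollar figure for a 23-euro offset

print("Offset price vs. best-case venture revenue: %.0fx" % (offset_price / venture_revenue_high))
print("Offset price vs. low-end venture revenue:   %.0fx" % (offset_price / venture_revenue_low))
```

On these numbers, paying landholders to keep forest standing would be worth several times what clearing it earns, which is the economic intuition behind REDD-style compensation.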
See also: Timeline of environmental events.
Deforestation has been practiced by humans for tens of thousands of years, since before the beginnings of civilization. Fire was the first tool that allowed humans to modify the landscape. The first evidence of deforestation appears in the Mesolithic period. It was probably used to convert closed forests into more open ecosystems favourable to game animals. With the advent of agriculture, fire became the prime tool to clear land for crops. In Europe there is little solid evidence before 7000 BC. Mesolithic foragers used fire to create openings for red deer and wild boar. In Great Britain, shade-tolerant species such as oak and ash are replaced in the pollen record by hazels, brambles, grasses and nettles. Removal of the forests led to decreased transpiration, resulting in the formation of upland peat bogs. Widespread decrease in elm pollen across Europe between 8400-8300 BC and 7200-7000 BC, starting in southern Europe and gradually moving north to Great Britain, may represent land clearing by fire at the onset of Neolithic agriculture. The Neolithic period saw extensive deforestation for farming land. Stone axes were being made from about 3000 BC not just from flint, but from a wide variety of hard rocks from across Britain and North America as well. They include the noted Langdale axe industry in the English Lake District, quarries developed at Penmaenmawr in North Wales and numerous other locations. Rough-outs were made locally near the quarries, and some were polished locally to give a fine finish. This step not only increased the mechanical strength of the axe, but also made penetration of wood easier. Flint was still used from sources such as Grimes Graves and from many other mines across Europe.
Throughout most of history, humans were hunter gatherers who hunted within forests. In most areas, such as the Amazon, the tropics, Central America, and the Caribbean, only after shortages of wood and other forest products occur are policies implemented to ensure forest resources are used in a sustainable manner.
In ancient Greece, Tjeerd van Andel and co-writers summarized three regional studies of historic erosion and alluviation and found that, wherever adequate evidence exists, a major phase of erosion follows the introduction of farming in the various regions of Greece by about 500-1,000 years, ranging from the later Neolithic to the Early Bronze Age. The thousand years following the mid-first millennium BCE saw serious, intermittent pulses of soil erosion in numerous places. Historic silting of ports occurred along the southern coasts of Asia Minor (e.g. Clarus, and the examples of Ephesus, Priene and Miletus, where harbors had to be abandoned because of the silt deposited by the Meander) and in coastal Syria during the last centuries BC.
Easter Island has suffered from heavy soil erosion in recent centuries, aggravated by agriculture and deforestation. Jared Diamond gives an extensive look into the collapse of the ancient Easter Islanders in his book Collapse. The disappearance of the island's trees seems to coincide with a decline of its civilization around the 17th and 18th century.
The famous silting up of the harbor for Bruges, which moved port commerce to Antwerp, also followed a period of increased settlement growth (and apparently of deforestation) in the upper river basins. In early medieval Riez in upper Provence, alluvial silt from two small rivers raised the riverbeds and widened the floodplain, which slowly buried the Roman settlement in alluvium and gradually moved new construction to higher ground; concurrently the headwater valleys above Riez were being opened to pasturage.
A typical progress trap is that cities were often built in a forested area providing wood for some industry (e.g. construction, shipbuilding, pottery). When deforestation occurs without proper replanting, local wood supplies become difficult to obtain near enough to remain competitive, leading to the city's abandonment, as happened repeatedly in Ancient Asia Minor. The combination of mining and metallurgy often went along this self-destructive path.
Meanwhile, with most of the population remaining active in (or indirectly dependent on) the agricultural sector, the main pressure in most areas remained land clearing for crop and cattle farming; fortunately, enough wild green was usually left standing (and partially used, e.g. to collect firewood, timber and fruits, or to graze pigs) for wildlife to remain viable, and the hunting privileges of the elite (nobility and higher clergy) often protected significant woodlands.
Major parts in the spread (and thus more durable growth) of the population were played by monastic 'pioneering' (especially by the Benedictine and Cistercian orders) and by some feudal lords actively attracting farmers to settle (and become tax payers) by offering relatively good legal and fiscal conditions - even when they did so to launch or encourage cities, there was always an agricultural belt around, and even some farming within, the walls. When, on the other hand, demography took a real blow from such causes as the Black Death or devastating warfare (e.g. Genghis Khan's Mongol hordes in eastern and central Europe, the Thirty Years' War in Germany), this could lead to settlements being abandoned, leaving land to be reclaimed by nature, even though the secondary forests usually lacked the original biodiversity.
From 1100 to 1500 AD significant deforestation took place in Western Europe as a result of the expanding human population. The large-scale building of wooden sailing ships by European (coastal) naval powers from the 15th century onward, for exploration, colonisation, the slave trade and other trade on the high seas, for (often related) naval warfare (the failed invasion of England by the Spanish Armada in 1588 and the battle of Lepanto in 1571 are early cases of huge waste of prime timber; each of Nelson's Royal Navy warships at Trafalgar had required 6,000 mature oaks) and for piracy, meant that whole woody regions were over-harvested, as in Spain, where this contributed to the paradoxical weakening of the domestic economy since Columbus' discovery of America made colonial activities (plundering, mining, cattle, plantations, trade ...) predominant.
In Changes in the Land (1983), William Cronon collected 17th century New England Englishmen's reports of increased seasonal flooding during the time that the forests were initially cleared, and it was widely believed that it was linked with widespread forest clearing upstream.
The massive use of charcoal on an industrial scale in Early Modern Europe was a new acceleration of the onslaught on western forests; even in Stuart England, the relatively primitive production of charcoal had already reached an impressive level. For ship timbers, Stuart England was so widely deforested that it depended on the Baltic trade and looked to the untapped forests of New England to supply the need. In France, Colbert planted oak forests to supply the French navy in the future; as it turned out, when the oak plantations matured in the mid-nineteenth century, the masts were no longer required.
Specific parallels are seen in twentieth century deforestation occurring in many developing nations.
The difficulties of estimating deforestation rates are nowhere more apparent than in the widely varying estimates of rates of rainforest deforestation. At one extreme, Alan Grainger of Leeds University argues that there is no credible evidence of any long-term decline in rainforest area, while at the other some environmental groups argue that one fifth of the world's tropical rainforest was destroyed between 1960 and 1990, that rainforests 50 years ago covered 14% of the world's land surface and have been reduced to 6%, and that all tropical forests will be gone by the year 2090. While the FAO states that the annual rate of tropical closed forest loss is declining (FAO data are based largely on reporting from forestry departments of individual countries), from 8 million ha in the 1980s to 7 million in the 1990s, some environmentalists state that rainforests are being destroyed at an ever-quickening pace. The London-based Rainforest Foundation notes that "the UN figure is based on a definition of forest as being an area with as little as 10% actual tree cover, which would therefore include areas that are actually savannah-like ecosystems and badly damaged forests."
These divergent viewpoints are the result of the uncertainties in the extent of tropical deforestation. For tropical countries, deforestation estimates are very uncertain and could be in error by as much as +/- 50%, while based on satellite imagery, the rate of deforestation in the tropics is 23% lower than the most commonly quoted rates. Conversely, a new analysis of satellite images reveals that deforestation of the Amazon rainforest is twice as fast as scientists previously estimated. The extent of deforestation that occurred in West Africa during the twentieth century is currently being hugely exaggerated.
Despite these uncertainties there is agreement that development of rainforests remains a significant environmental problem. Up to 90% of West Africa's coastal rainforests have disappeared since 1900. In South Asia, about 88% of the rainforests have been lost. Much of what remains of the world's rainforests is in the Amazon basin, where the Amazon Rainforest covers approximately 4 million square kilometres. The regions with the highest tropical deforestation rate between 2000 and 2005 were Central America, which lost 1.3% of its forests each year, and tropical Asia. In Central America, 40% of all the rainforests have been lost in the last 40 years. Madagascar has lost 90% of its eastern rainforests. As of 2007, less than 1% of Haiti's forests remain. Several countries, notably Brazil, have declared their deforestation a national emergency.
From about the mid-1800s, around 1852, the planet has experienced an unprecedented rate of destruction of forests worldwide. More than half of the mature tropical forests that once covered the planet have been cleared.
A January 30, 2009 New York Times article stated, "By one estimate, for every acre of rain forest cut down each year, more than 50 acres of new forest are growing in the tropics..." The new forest includes secondary forest on former farmland and so-called degraded forest.
Africa is suffering deforestation at twice the world rate, according to the U.N. Environment Programme (UNEP). Some sources claim that deforestation has already wiped out roughly 90% of West Africa's original forests. Deforestation is accelerating in Central Africa. According to the FAO, Africa lost the highest percentage of tropical forests of any continent. According to figures from the FAO (1997), only 22.8% of West Africa's moist forests remain, much of this degraded. Massive deforestation threatens food security in some African countries. Africa experiences one of the highest rates of deforestation because 90% of its population depends on wood as the main fuel for heating and cooking.
Research carried out by WWF International in 2002 shows that in Africa, rates of illegal logging vary from 50% for Cameroon and Equatorial Guinea to 70% in Gabon and 80% in Liberia – where revenues from the timber industry also fuelled the civil war.
See main article: Deforestation in Ethiopia. The main cause of deforestation in Ethiopia, located in East Africa, is a growing population and the subsequent higher demand for agriculture, livestock production and fuel wood. Other reasons include low education and inactivity from the government, although the current government has taken some steps to tackle deforestation. Organizations such as Farm Africa are working with the federal and local governments to create a system of forest management. Ethiopia, the third largest country in Africa by population, has been hit by famine many times because of shortages of rain and a depletion of natural resources. Deforestation has lowered the chance of getting rain, which is already low, and thus causes erosion. Bercele Bayisa, an Ethiopian farmer, offers one example of why deforestation occurs. He said that his district was forested and full of wildlife, but overpopulation caused people to come to that land and clear it to plant crops, cutting all the trees to sell as firewood.
Ethiopia has lost 98% of its forested regions in the last 50 years. At the beginning of the 20th century, around 420,000 km² or 35% of Ethiopia's land was covered with forests. Recent reports indicate that forests cover less than 14.2% or even only 11.9% now. Between 1990 and 2005, the country lost 14% of its forests or 21,000 km².
Deforestation with resulting desertification, water resource degradation and soil loss has affected approximately 94% of Madagascar's previously biologically productive lands. Since the arrival of humans 2000 years ago, Madagascar has lost more than 90% of its original forest. Most of this loss has occurred since independence from the French, and is the result of local people using slash-and-burn agricultural practices as they try to subsist. Largely due to deforestation, the country is currently unable to provide adequate food, fresh water and sanitation for its fast growing population.
See main article: Deforestation in Nigeria. According to the FAO, Nigeria has the world's highest deforestation rate of primary forests. It has lost more than half of its primary forest in the last five years. Causes cited are logging, subsistence agriculture, and the collection of fuel wood. Almost 90% of West Africa's rainforest has been destroyed.
Iceland has undergone extensive deforestation since Vikings settled in the ninth century. As a result, vast areas of vegetation and land have degraded, and soil erosion and desertification have occurred. As much as half of the original vegetative cover has been destroyed, caused in part by overexploitation, logging and overgrazing under harsh natural conditions. About 95% of the forests and woodlands that once covered at least 25% of the area of Iceland may have been lost. Afforestation and revegetation have restored small areas of land.
Victoria and NSW's remnant red gum forests, including the Murray River's Barmah-Millewa, are increasingly being clear-felled using mechanical harvesters, destroying already rare habitat. Macnally estimates that approximately 82% of fallen timber has been removed from the southern Murray Darling basin, and the Mid-Murray Forest Management Area (including the Barmah and Gunbower forests) provides about 90% of Victoria's red gum timber.
One of the factors causing the loss of forest is expanding urban areas. Littoral Rainforest growing along coastal areas of eastern Australia is now rare due to ribbon development to accommodate the demand for seachange lifestyles.
See main article: Deforestation in Brazil. There is no agreement on what drives deforestation in Brazil, though a broad consensus exists that expansion of croplands and pastures is important. Increases in commodity prices may increase the rate of deforestation. Recent development of a new variety of soybean has led to the displacement of beef ranches and farms of other crops, which, in turn, move farther into the forest. Certain areas such as the Atlantic Rainforest have been diminished to just 7% of their original size. Although much conservation work has been done, few national parks or reserves are efficiently enforced. Some 80% of logging in the Amazon is illegal.
In 2008, Brazil's government announced a record rate of deforestation in the Amazon. Deforestation jumped by 69% in 2008 compared with the previous twelve months, according to official government data. Deforestation could wipe out or severely damage nearly 60% of the Amazon rainforest by 2030, says a new report from WWF.
One case of deforestation in Canada is happening in Ontario's boreal forests, near Thunder Bay, where 28.9% of a 19,000 km² forest area has been lost in the last 5 years, threatening woodland caribou. This is happening mostly to supply pulp for the facial tissue industry.
In Canada, less than 8% of the boreal forest is protected from development and more than 50% has been allocated to logging companies for cutting.
The forest loss is acute in Southeast Asia, the second of the world's great biodiversity hot spots. According to a 2005 report by the FAO, Vietnam has the second highest rate of deforestation of primary forests in the world, second only to Nigeria. More than 90% of the old-growth rainforests of the Philippine archipelago have been cut.
Russia has the largest area of forests of any nation on Earth. There is little recent research into the rates of deforestation, but in 1992 about 2 million hectares of forest were lost and in 1994 around 3 million hectares were lost. The present scale of deforestation in Russia is most easily seen using Google Earth; areas nearer to China are most affected, as it is the main market for the timber. Deforestation in Russia is particularly damaging as the forests have a short growing season due to extremely cold winters and therefore will take longer to recover.
At present rates, tropical rainforests in Indonesia would be logged out in 10 years, and Papua New Guinea's in 13 to 16 years. There are significantly large areas of forest in Indonesia that are being lost as native forest is cleared by large multinational pulp companies and replaced by plantations. In Sumatra tens of thousands of square kilometres of forest have been cleared, often under the direction of the central government in Jakarta, which cooperates with multinational companies to remove the forest because of the need to pay off international debt obligations and to develop economically. In Kalimantan, between 1991 and 1999, large areas of the forest were burned because of uncontrollable fire, causing atmospheric pollution across South-East Asia. Every year, forests are burned by farmers (slash-and-burn techniques are used by between 200 and 500 million people worldwide) and plantation owners. A major source of deforestation is the logging industry, driven spectacularly by China and Japan. Agricultural development programs in Indonesia (the transmigration program) moved large populations into the rainforest zone, further increasing deforestation rates.
A joint UK-Indonesian study of the timber industry in Indonesia in 1998 suggested that about 40% of throughput was illegal, with a value in excess of $365 million. More recent estimates, comparing legal harvesting against known domestic consumption plus exports, suggest that 88% of logging in the country is illegal in some way. Malaysia is the key transit country for illegal wood products from Indonesia.
Prior to the arrival of European-Americans, about one half of the United States land area was forest: about 4 million square kilometers (1 billion acres) in 1600. For the next 300 years land was cleared, mostly for agriculture, at a rate that matched the rate of population growth. For every person added to the population, one to two hectares of land was cultivated. This trend continued until the 1920s, when the amount of crop land stabilized in spite of continued population growth. As abandoned farm land reverted to forest, the amount of forest land increased from 1952, reaching a peak in 1963 of 3,080,000 km² (762 million acres). Since 1963 there has been a steady decrease of forest area, with the exception of some gains from 1997. Gains in forest land have resulted from conversions from crop land and pastures at a higher rate than loss of forest to development. Because urban development is expected to continue, an estimated 93,000 km² (23 million acres) of forest land is projected to be lost by 2050, a 3% reduction from 1997. Other qualitative issues have been identified, such as the continued loss of old-growth forest, the increased fragmentation of forest lands, and the increased urbanization of forest land.
According to a report by Stuart L. Pimm, the extent of forest cover in the Eastern United States reached its lowest point in roughly 1872, with about 48 percent compared to the amount of forest cover in 1620. Of the 28 forest bird species with habitat exclusively in that forest, Pimm claims 4 became extinct either wholly or mostly because of habitat loss: the passenger pigeon, Carolina parakeet, ivory-billed woodpecker, and Bachman's Warbler.
A key factor in controlling deforestation could come from the Kyoto Protocol. Avoided deforestation, also known as Reduced Emissions from Deforestation and Degradation (REDD), could be implemented in a future Kyoto Protocol and allow the protection of a great amount of forests. At the moment, REDD is not yet implemented in any of the flexible mechanisms such as CDM, JI or ET.
New methods are being developed to farm more intensively, such as high-yield hybrid crops, greenhouses, autonomous building gardens, and hydroponics. These methods are often dependent on chemical inputs to maintain necessary yields. In cyclic agriculture, cattle are grazed on farm land that is resting and rejuvenating. Cyclic agriculture actually increases the fertility of the soil. Intensive farming can also decrease soil nutrients by consuming at an accelerated rate the trace minerals needed for crop growth.
Deforestation presents multiple societal and environmental problems. The immediate and long-term consequences of global deforestation are almost certain to jeopardize life on Earth as we know it. Some of these consequences include loss of biodiversity, the destruction of forest-based societies, and climatic disruption. For example, extensive loss of the Amazon Rainforest could release enormous amounts of carbon dioxide back into the atmosphere.
Efforts to stop or slow deforestation have been attempted for many centuries because it has long been known that deforestation can cause environmental damage sufficient in some cases to cause societies to collapse. In Tonga, paramount rulers developed policies designed to prevent conflicts between short-term gains from converting forest to farmland and the long-term problems forest loss would cause, while during the seventeenth and eighteenth centuries in Tokugawa Japan the shoguns developed a highly sophisticated system of long-term planning to stop and even reverse the deforestation of the preceding centuries, by substituting other products for timber and making more efficient use of land that had been farmed for many centuries. In sixteenth-century Germany landowners also developed silviculture to deal with the problem of deforestation. However, these policies tend to be limited to environments with good rainfall, no dry season and very young soils (through volcanism or glaciation). This is because on older and less fertile soils trees grow too slowly for silviculture to be economic, whilst in areas with a strong dry season there is always a risk of forest fires destroying a tree crop before it matures.
In the areas where "slash-and-burn" is practiced, switching to "slash-and-char" would prevent the rapid deforestation and subsequent degradation of soils. The biochar thus created, given back to the soil, is not only a durable carbon sequestration method, but it also is an extremely beneficial amendment to the soil. Mixed with biomass it brings the creation of terra preta, one of the richest soils on the planet and the only one known to regenerate itself.
In many parts of the world, especially in East Asian countries, reforestation and afforestation are increasing the area of forested lands. The amount of woodland has increased in 22 of the world's 50 most forested nations. Asia as a whole gained 1 million hectares of forest between 2000 and 2005. Tropical forest in El Salvador expanded more than 20 percent between 1992 and 2001. Based on these trends global forest cover is expected to increase by 10 percent—an area the size of India—by 2050.
In the People's Republic of China, where large scale destruction of forests has occurred, the government has in the past required that every able-bodied citizen between the ages of 11 and 60 plant three to five trees per year or do the equivalent amount of work in other forest services. The government claims that at least 1 billion trees have been planted in China every year since 1982. This is no longer required today, but March 12 of every year in China is the Planting Holiday. The government has also introduced the Green Wall of China project, which aims to halt the expansion of the Gobi Desert through the planting of trees. However, due to the large percentage of trees dying off after planting (up to 75%), the project is not very successful, and regular carbon offsetting through the Flexible Mechanisms might have been a better option. There has been a 47-million-hectare increase in forest area in China since the 1970s. The total number of trees amounts to about 35 billion, and forest coverage of China's land mass has increased by 4.55 percentage points, from 12% two decades ago to 16.55% now.
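The hectare figure and the coverage percentages quoted above can be cross-checked with a little arithmetic. The sketch below assumes a land area for China of roughly 9.6 million km²; that figure is an assumption for illustration, not something stated in the text.

```python
# Rough consistency check of the Chinese reforestation figures quoted above.
# China's land area (~9.6 million km2) is an assumed input, not from the text.
china_land_km2 = 9.6e6
coverage_then = 0.12     # forest coverage two decades ago
coverage_now = 0.1655    # current forest coverage quoted above

increase_km2 = (coverage_now - coverage_then) * china_land_km2
increase_ha = increase_km2 * 100  # 1 km2 = 100 hectares
print("Implied increase: %.1f million hectares" % (increase_ha / 1e6))
```

The implied figure of roughly 44 million hectares is in the same range as the 47-million-hectare increase cited, so the percentages and the hectare figure are broadly consistent.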
In western countries, increasing consumer demand for wood products that have been produced and harvested in a sustainable manner are causing forest landowners and forest industries to become increasingly accountable for their forest management and timber harvesting practices.
The Arbor Day Foundation's Rain Forest Rescue program is a charity that helps to prevent deforestation. The charity uses donated money to buy up and preserve rainforest land before the lumber companies can buy it. The Arbor Day Foundation then protects the land from deforestation. This also protects the way of life of the primitive tribes living on the forest land. Organizations such as Community Forestry International, The Nature Conservancy, World Wide Fund for Nature, Conservation International, African Conservation Foundation and Greenpeace also focus on preserving forest habitats. Greenpeace in particular has also mapped out the forests that are still intact and published this information on the internet. HowStuffWorks, in turn, made a simpler thematic map showing the amount of forest present just before the age of man (8000 years ago) and the current (reduced) levels of forest. Together, the Greenpeace map and the HowStuffWorks thematic map indicate the amount of afforestation required to repair the damage caused by man.
To meet the world's demand for wood, forestry writers Botkin and Sedjo have suggested that high-yielding forest plantations are suitable. It has been calculated that plantations yielding 10 cubic meters per hectare annually could supply all the timber required for international trade on 5 percent of the world's existing forestland. By contrast, natural forests produce only about 1–2 cubic meters per hectare, so 5 to 10 times more forest land would be required to meet the same demand. Forester Chad Oliver has suggested a forest mosaic with high-yield forest lands interspersed with conservation land.
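To make the comparison explicit (a rough back-of-the-envelope sketch using only the yield figures quoted above, not additional data), the land area needed to supply a fixed volume of timber scales inversely with yield:

\[
\frac{A_{\text{natural forest}}}{A_{\text{plantation}}} = \frac{10\ \mathrm{m^3/ha\ per\ year}}{1\text{–}2\ \mathrm{m^3/ha\ per\ year}} \approx 5\text{–}10,
\]

so meeting the same internationally traded demand from natural forest alone would occupy roughly 25–50 percent of the world's existing forestland, rather than the 5 percent estimated for high-yield plantations.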
According to an international team of scientists led by Pekka Kauppi, professor of environmental science and policy at Helsinki University, the deforestation already done could still be reversed by tree planting (e.g. CDM and JI afforestation/reforestation projects) within 30 years. This conclusion was based on analysis of data acquired from the FAO.
Reforestation through tree planting (through, e.g., the noted CDM and JI A/R projects) might take advantage of the changing precipitation caused by climate change. This may be done by studying where precipitation is projected to increase (see the globalis thematic map of 2050 precipitation) and setting up reforestation projects in those locations. Areas such as Niger, Sierra Leone and Liberia are especially important candidates, in large part because they also suffer from an expanding desert (the Sahara) and decreasing biodiversity (while being important biodiversity hotspots).
While the preponderance of deforestation is due to demand for agricultural and urban land for the human population, there are some examples of military causes. One example of deliberate deforestation is that which took place in the U.S. zone of occupation in Germany after World War II. Before the onset of the Cold War, defeated Germany was still considered a potential future threat rather than a potential future ally. To address this threat, attempts were made to lower German industrial potential, of which forests were deemed an element. Sources in the U.S. government admitted that the purpose of this was the "ultimate destruction of the war potential of German forests." As a consequence of the practice of clear-felling, deforestation resulted which could "be replaced only by long forestry development over perhaps a century."
War can also be a cause of deforestation, either deliberately, as through the use of Agent Orange during the Vietnam War, where, together with bombs and bulldozers, it contributed to the destruction of 44 percent of the forest cover, or inadvertently, as in the 1945 Battle of Okinawa, where bombardment and other combat operations reduced the lush tropical landscape to "a vast field of mud, lead, decay and maggots". | http://everything.explained.at/Deforestation/ | 13
17 | This term, borrowed from French (also laisser-faire), means ‘Let (people) do (as they think best)’. This phrase expresses the ‘principle that government should not interfere with the actions of individuals especially in industrial affairs and in trade’ (Oxford English Dictionary). Much of the Government’s attitude to the Irish situation was determined by this fashionable philosophy of ‘political economy’ rather than by the facts. Ministers invoked the principles of laissez-faire, but in fact they did intervene, and often crudely. In the Government’s model of political economy, Ireland was an over-populated country where sub-division of land and dependence on the potato left peasant and landlord alike with too much idle time. Property owners should undertake the responsibilities of property. The lack of economic progress was seen as failure. Consequently, the solution to the Irish problem was to end the system of ‘easy existence’ by diversifying economic activity, stopping sub-division, reducing the role of the potato, and bringing men of energy and capital into the country.
In a period of crisis, such as the Famine, prejudice and fear were easily turned into policies. Ireland was caricatured for its poverty and seen as a possible threat to the economic prosperity of the United Kingdom. Britain was at this stage on the verge of industrial and imperial ascendancy and its leaders may have felt that it could be hampered by its closeness, geographically and politically, to an impoverished, over-populated, potato-fed, and priest-ridden Ireland.
Inquiries into the condition of Ireland in the nineteenth century focussed mostly on its poverty, its system of landholding, the size of its population and the backwardness of its agriculture, especially the continuing dependence on the potato. The debates that followed were shaped by the writings of some leading economists. One of the most influential doctrines was ‘political economy’. Its principal proponent was Adam Smith, who set out his principles in his book, An Inquiry into the Nature and Causes of the Wealth of Nations. He believed that the wealth of a nation could be increased if the market was free from constraints and Government intervention was kept to a minimum. He applied the same principle to the relationship between the Government and the individual, and he used it to justify individualism and self-help. Adam Smith’s ideas were complex, but they were often reduced to the simple slogan, laissez-faire, meaning no Government interference.
Smith’s ideological heirs included Thomas Malthus, Edmund Burke, David Ricardo, Nassau Senior, Harriet Martineau and Jeremy Bentham. These writers developed their own individual and frequently contradictory interpretations of ‘political economy’. Bentham summed up his principles: ‘Laissez-faire, in short, should be the general practice: every departure, unless required by some great good, is certain evil’. Government ministers from William Pitt to Lord John Russell were inspired by this philosophy. Burke, in a memorandum to Pitt on the duty of Government not to intervene during a period of scarcity, assured the Prime Minister that even God was on their side.
Paradoxically, ‘political economy’ existed in a period of increasing government action. Government intervention was frequent, piecemeal, and measured. In the case of the 1834 English Poor Law, the Government intervened to reduce costs. When it suited the Government, laissez-faire could be doctrine; when it did not, as in the case of the Corn Laws and the Navigation Acts, it was ignored. One of its main attractions was that ‘ministers could take whatever suited them from political economy and reject whatever did not’. During the Famine, political economy was invoked to justify non-interference in the grain trade, following the disastrous blight of 1846–7. It had the strong support of political economists in the Whig Cabinet, including Charles Wood and the Colonial Secretary, Earl Grey. At the height of the distress, the writings of Smith and Burke were sent to relief officers in Ireland, and they were encouraged to read them in their spare time.
During the decades before the Famine, much attention was focussed on Ireland’s poverty and its fast-growing population. The problems of the country were being reduced to the fashionable Malthusian equation of a fast-growing population and heavy dependence on a single resource—the potato—that made vice and misery inevitable. Not even the most pessimistic observers thought a major famine was imminent and some were optimistic about the prospects of the country.
Official observers in Britain, many influenced by Malthus’s ideas, were pessimistic about Ireland. The Census returns and other Government inquiries confirmed that the country was suffering the many evils of heavy dependence on one crop, extensive poverty, and a fast-growing population. It was perhaps convenient and pragmatic to see Ireland, in Malthusian terms, as a society in crisis. To see Ireland as an economy trapped in a spiral of poverty, with social disaster inevitable, was convenient: it made the Government and its officials appear blameless.
| http://multitext.ucc.ie/d/Laissez-Faire | 13
18 | In the middle of the eighteenth century, Jews living in German territories were just beginning to feel the effects of the political, social and intellectual changes that would soon be recognized as the hallmarks of the modern world. Until this period, Jewish communities had been constituted as distinct and autonomous social, religious and legal entities within an essentially feudal social organization. The Jews were subjected to the will of the rulers of individual German states, who imposed onerous regulations, taxes and restrictions on their ability to marry and settle where they chose. Distinguished from the rest of the population by religious traditions and family structure, Jews lived under the authority of the Jewish community, wholly separate from the non-Jewish population. In the early 1780s, however, Enlightenment thinkers began to call for an end to the discrimination against Jews in Prussia and Austria. Most important among these voices was the high-ranking Prussian government official Christian Wilhelm Dohm (1751–1820), who argued that Jews be granted the same civil rights as those accorded to non-Jewish citizens. In his essay “On the Civil Betterment of the Jews” (1781) Dohm explained Jews’ moral “depravity” as the result of centuries of oppression. Only with the elimination of the oppressive conditions that produced their allegedly defective character, Dohm argued, would Jews be able to gradually overcome their “disabilities” and prove themselves to be useful citizens.
These changes in the intellectual realm coincided with broader social and political realignments of the period. In its attempt to centralize its power and eliminate intermediate corporate bodies such as guilds and estates, the emerging absolutist state also sought the dissolution of the autonomous Jewish community and the integration of its members into the larger social body. Thus the development of more modern social and political structures for German Jews arose as much from larger external factors as from a desire to address the particular situation of the Jews. Indeed, the practical reforms that were introduced by Emperor Joseph II in his “Tolerance Decree” of 1781 probably exerted a more immediate impact on the situation of Jews in parts of the German-speaking territories than did Enlightenment thought itself. Joseph II’s decree enacted the first legal measures to reduce legal restrictions on the Jewish population in parts of the Habsburg Empire. Despite the beginnings of a more consolidated state authority, Jewish legal status still varied within the territories of the German Empire until the unification of Germany in 1871.
Simultaneous with the political developments that gradually began to erode the legal barriers separating Jew from non-Jew were important changes that also took place within Jewish society itself. Influenced by the spirit of the Enlightenment, Jewish intellectuals began a new critical engagement with Jewish tradition and, in so doing, created a new cultural, social and intellectual framework and helped bring forth an invigorated public sphere that challenged the authority of the official Jewish community. A new intellectual elite emerged, distinct from the rabbinate and its influence. The combined impact of the centralizing absolutist state and the emergence of the European and Jewish Enlightenments marked the beginning of a change in the legal status of the Jews that would extend over more than a hundred-year period. Yet the progress of Jewish Emancipation in different territories was anything but linear. During periods of political liberalization, progress toward equal rights proceeded apace, but the process was set back during periods of conservative counterreaction.
Because of the protracted nature of the struggle for Jewish Emancipation, and the twin efforts to win both legal and social acceptance, the uneven development of Emancipation was a central defining experience for German Jews. Yet historians have generally treated the Emancipation of the Jews as an event of universal significance for German Jewry without paying significant attention to the gendered aspects of its unfolding. After a long and often painful process, Jewish men did finally achieve full political and civil rights with the unification of Germany in 1871. In principle, if not entirely in practice, this removed most remaining legal disabilities that had prevented the full integration of male Jews into German society. But at the time of Emancipation, Jewish women—like women in general—received no such rights and remained unable to vote until 1918. In fact, as men looked toward an era of increasing liberalization, women were politically disenfranchised as the result of a law that was in effect until 1908, banning women from joining political organizations. Although German women were citizens, their status was ultimately determined by the citizenship of their father or husband. The status of East European Jewish women was even more precarious, since many immigrant women were not permitted to become citizens at all. Within the Jewish community, Jewish women had to wait even longer to gain a voice and a vote. In Germany, Jewish women suffered the double indignity of sexism and antisemitism, with second-class status imposed both inside and outside the Jewish community.
If Emancipation affected Jewish men and women in distinct ways, the pace and extent to which Jews adapted themselves to the demands of German society also differed according to gender-specific patterns. Because social acceptance was made contingent upon the acquisition of the basic customs, behaviors and values of German society, Jewish men and women took different paths toward, and found different means for, becoming at once fully German and distinctly Jewish. Jewish men tended to adapt to the demands of middle-class society by abandoning public religious behaviors, including the observance of Jewish dietary laws and the prohibition of work on the Sabbath. They also concerned themselves less with Jewish learning and worship than with secular education, which they pursued with unparalleled enthusiasm. For women, the road to acculturation led to the formation of new roles inside and outside the home. Changes in family structure and employment patterns led to adjustments in the gender division of labor between the domestic and public spheres, and these changes in the family, religion and labor, in turn, affected the construction of gender roles within the Jewish community, and gender relations as a whole. Even the category “Jewish woman” was infused with new meanings that accorded with middle-class norms and ideals for the bourgeois German woman. In fulfilling their newly defined “woman’s nature,” Jewish women created a proliferation of voluntary associations, involved themselves in non-Jewish associations, and pioneered the field of social work. The path to becoming “German” and “German Jewish” thus proved to be profoundly gendered.
One of the earliest examples of a specifically female experience of the Enlightenment can be found in the Berlin salons of the late eighteenth century. Although the German Jewish Enlightenment is usually associated with the intellectual circle around Moses Mendelssohn and its literary output, most historical literature has treated the Enlightenment as an intellectual and socio-cultural phenomenon that has almost exclusively involved men. Yet during the last two decades of the eighteenth century, even as Mendelssohn was evolving his philosophical reformulation of Judaism, a small group of young women from Berlin's small but influential Jewish upper class crafted a place for themselves at the very center of the city's social and intellectual life. Jewish salonnières, most notably Rachel Levin Varnhagen and Dorothea Schlegel Mendelssohn, hosted social and intellectual gatherings in their homes that brought together Jews and non-Jews, noblemen and commoners, to socialize and exchange ideas. Creating a cultural space unprecedented in its openness to Jews and women, these salons appear to have existed only for a brief historical moment as if outside the normal social constraints that enforced the hierarchical organization of society around the axes of gender, class and religion.
Because a substantial number of these women converted to Christianity and entered into (often second) marriages with non-Jews, Jewish historians have sometimes been quick to condemn them for the betrayal of their people and faith. So traitorous were they, concluded Heinrich Graetz, that “these talented but sinful Jewish women did Judaism a service by becoming Christians” (Lowenstein, 109). Indeed, for their contemporaries, as well as for many historians, these women represented the embodiment of a larger set of social problems afflicting Berlin Jewish society. Whatever the exact mix of their motives for conversion and intermarriage—an ascent in social status, the promise of companionate marriage or liberation from their patriarchal families—the fact that contemporary observers and later historians held these boundary-crossing women responsible for the most visible symptoms of modern social change suggests the extent to which the transition from traditional Jewish society to modern Judaism was represented through the language of gender.
Among the earliest and most important trajectories for the progress of German Jewish acculturation was the modernization of Judaism. Although religious modernization by no means did away with gender hierarchy, it nevertheless altered gender expectations and gender roles, as well as popular forms of religious practice. Within traditional Judaism, those aspects of religious practice that had historically been invested with the greatest value were organized hierarchically along clearly gendered lines: the study of Torah and public prayer formed the religious centerpiece of Jewish life, and these acts were accessible only to men. Women’s religious activity tended to be less structured and more personal and took place primarily in the family, focusing on religious aspects of home life, the observance of Sabbath and holidays, and the maintenance of dietary and family purity laws. Though accorded importance within Judaism, these “domestic” practices of Judaism were lower in prestige than the more public practice of Judaism dominated by men.
The liberal religious reform movement, which has garnered so much attention in the historical literature, did not, however, fundamentally transform the role of women in Judaism. Reformers seeking to modernize Judaism in accordance with Enlightenment ideals and middle-class behavioral and aesthetic visions endeavored to make prayer services more attractive to women as well as men by changing the language of prayer from Hebrew to German and replacing Hebrew excurses on the law with uplifting preaching in German that was modeled on Protestant worship services. Equally important, greater attention was paid to women’s religious education, primarily through the inclusion of women in the newly introduced ritual of confirmation. There was even some discussion at the 1846 Rabbinic Assembly in Breslau of far-reaching changes that would have granted women greater religious equality. Yet when it came to practice, the nineteenth-century Reform movement failed to eliminate many of the traditional religious restrictions that kept women in a subordinate status. In the synagogue, women could still neither be counted in a prayer quorum nor called to the Torah, and they often remained seated apart and comfortably out of view in the women’s gallery. Despite pronouncements against the segregation of women, religious reformers ultimately made few substantial improvements in women’s status.
In addition to religious reform, religious modernization also includes those religious and cultural changes that resulted from the increased participation of women in religious associations outside the home and the formal sphere of the Jewish community. Indeed, it may well have been this phenomenon, more than religious reform itself, which affected gender relations more broadly and contributed to a more substantive reconfiguration of the traditional Jewish gender order. Beginning in the late eighteenth century, middle-class women began to create charitable associations, such as sick care and self-help societies, that mirrored both the form and content of male associations. According to the historian Maria Benjamin Baader, these new female voluntary organizations were made possible with the declining emphasis on traditional male learning that had once marked women as marginal. Expressing both Jewish and bourgeois values, women’s activity in this realm would, by the early twentieth century, lead into new professional opportunities as well as to the production of new forms of Jewish religious and ethnic expression.
Women’s activities in voluntary organizations, in turn, were linked to broader cultural shifts in the German bourgeoisie. Thus, in addition to being inflected by gender, it is important to note that the process of becoming a German Jewish woman was also affected by class status. Whereas on the eve of the modern era the majority of German Jews were poor, as they entered German society over the course of the nineteenth century, Jews aspired to join the class that was most suited to their skills in trade and commerce: the middle-class. Jews quickly embraced the ideal of the educated middle class that made culture, rather than birth, the defining character of class. While middle-class status for men was to be achieved through self-improvement and education (Bildung), the most important determinant of middle class respectability for women and for their husbands was her status as full-time “priestess of the home.”
Paradoxically, an important measure of specifically Jewish and middle class acculturation was a new form of family-centered Judaism that arose out of the strong emphasis placed on the family in bourgeois culture on the one hand, and the decline in traditional Jewish religious practice on the other. The nineteenth-century bourgeois ideal for the family was a prescriptive model based on a rigid gender-based division of labor that delimited women’s activities to the domestic sphere and men’s activity to the “public” arena. As an ideal, it quickly eclipsed the typical structure of premodern Jewish families where the boundaries between public and private remained more fluid. Of course, not all Jewish families could afford to imitate this model, since lower middle-class and working class families often had to rely on the work and wages of children and wives for the family economy. But the power of this construct as a universal model for Jewish family life is perhaps most evidenced by the fact that, since the mid-nineteenth century, the bourgeois family type has been viewed as the “traditional Jewish family.”
By the time the German states were joined in a federal system within the new German Empire in 1871, most of Germany’s Jews could proudly display their middle class status by pointing to their family life. Indeed, the research of Marion Kaplan has demonstrated how Jewish women managed the double task of transmitting the values and behaviors of the German bourgeoisie while helping to shape the Jewish identity of their children. Jewish women made sure their children learned the German classics and, at the same time, organized the observance of holidays, family gatherings and the religious and moral education of the children. Illustrating the family’s crucial role in the acculturation of German Jews, Kaplan’s research also suggests the extent to which the home was gradually being recast as the primary site for the transmission of Judaism. With the declining appeal of formal religious practice and institutions, including the synagogue, the Jewish mother, according to historian Jacob Toury, was expected to become the “protector of a new system of Jewish domestic culture” (Maurer, 147).
Although some historians suggest that Jewish men abandoned religious ritual and practice more quickly than women, by mid-century Jewish community leaders nevertheless began holding women increasingly accountable for assimilation, conversion and intermarriage—in short, for the decline of Judaism. This was the case despite the fact that the intermarriage and conversion rates of Jewish women remained lower than those of men through almost the entire nineteenth century. Even in the early twentieth century, twenty-two percent of Jewish men but only thirteen percent of Jewish women entered marriages with non-Jews. Whereas Jewish men who entered mixed marriages usually had middle-class incomes, Jewish women, by contrast, tended to marry non-Jews out of economic need or because of a lack of available male Jewish partners. And even though women’s intermarriage rates were lower than men’s, women in mixed marriages stood to lose their status in the official Jewish community, while men suffered no equivalent punishment. Male and female conversion rates similarly reflected the disproportionately high male intermarriage rates. Relatively few women converted before 1880, and when the rate increased, as it did during the years 1873–1906, women still accounted for only one quarter of all converts. In comparison with male converts, nearly double the number of women came from the lowest income categories. Rising female conversion rates appear to have coincided with the growth of secularization on the one hand, and women’s increasing participation in the workforce and ensuing encounter with antisemitism on the other. By 1912, women accounted for forty percent of all conversions.
Throughout the nineteenth and early twentieth centuries, Jewish girls received an education that was consonant with social expectations for women of their class. Until the 1890s, the only form of secular education available to girls was the elementary school and non-college-preparatory secondary school. Jewish girls of all classes attended either private or public elementary schools where they learned reading, writing, arithmetic and such “feminine” subjects as art, music and literature. From mid-century on, a disproportionately high percentage of Jewish girls attended girls’ secondary schools (Höhere Töchterschule) which tended to be associated with upward mobility and higher class status. Indeed, around the turn of the century, while 3.7 percent of non-Jewish girls in Prussia attended the Höhere Töchterschule, approximately forty-two percent of Jewish girls did. Upon completing school at the age of fifteen or sixteen, middle-class girls passed their time socializing, embroidering or doing volunteer work as they waited for their families to find them a suitable husband.
Even through the Imperial period, most middle-class Jewish marriages continued to be arranged either by marriage brokers or, more often, with the aid of parents and relatives. As a social institution, arranged marriage served as a means of locating Jewish marriage partners while simultaneously providing for the financial security of middle-class daughters and cementing economic alliances between families. By the end of the nineteenth century, the heavy emphasis placed on financial considerations in the search for marriage partners generated substantial criticism from within the Jewish community and particularly among young modern-minded women who wanted to choose their own life partners on the basis of romantic love. Beginning with the salon women in the eighteenth century, the decision to marry a non-Jewish man appears to have sometimes been driven at least in part by the ideal of companionate marriage. In other words, for some women, intermarriage represented not simply an act of betrayal, as it was sometimes perceived by observers, but in fact an act of independence, a rejection of a patriarchal social system that treated marriage as a financial and social transaction that was divorced from the individuals themselves.
Since the nineteenth-century ideology of separate spheres consigned women to the home, those women who desired access to higher education and professional training had a particularly difficult path to navigate. For both men and women, higher education offered a means of self-improvement that facilitated German Jewish acculturation together with the possibility for personal emancipation. Yet whereas young Jewish men had been permitted to attend college preparatory high schools (gymnasia) and universities since the early nineteenth century, Jewish women had been excluded from both institutions until the end of the century. It was not until the first decade of the twentieth century that German universities began admitting women. In the three years following the opening of Prussian universities to women in 1908, Jewish women already accounted for eleven percent of the female student population. By the time of the Nazi accession to power in 1933, a high proportion of Jewish women had received doctorates from German universities. One of the fields of study most in demand among Jewish women, and east European Jewish women in particular, was medicine. Philosophy was also the first choice of many Jewish women since it provided the required academic preparation for a teaching certificate. As one of the few careers considered socially acceptable for middle-class women, education continued to draw Jewish women despite the antisemitic discrimination they often faced. With somewhat less frequency, Jewish women also studied the social and natural sciences and law. Despite the relative prevalence of Jewish women at universities, however, their social acceptance did not proceed apace. Like men, Jewish women encountered widespread antisemitism at the university, but their sex proved to be an added obstacle in their path toward integration.
Because of the predominantly middle-class status of German Jews, fewer Jewish women were wage earners than non-Jewish women. But both single and married Jewish women did work outside the home, and they did so in growing numbers. The 1882 employment statistics for Prussia list only eleven percent of all Jewish women as part of the labor force, compared with twenty-one percent of non-Jewish women, but this figure masks the work of many more women who helped run family businesses or otherwise contributed to the household economy. In 1907, when the Prussian census included more of these invisible female workers, the employment rate was eighteen percent of Jewish women, compared with thirty percent of non-Jewish women. By the time of the Weimar Republic, with increased east European immigration, a worsening economy, and an increasing number of women working to support themselves, the gap between the Jewish and non-Jewish employment rate narrowed further, with twenty-seven percent of Jewish women now working, compared with thirty-four percent in the general population. Like Jewish men, middle-class Jewish women worked disproportionately within the commercial sector of the economy. But in contrast with native-born German women, east European immigrant working women were clustered in industrial labor, primarily in the tobacco and garment industries. In specifically low-status female occupations such as domestic service, east European immigrant women were significantly overrepresented.
One of the promising new employment opportunities for Jewish and non-Jewish women at the turn of the century was social work. Formulated by women themselves as an extension of the domestic sphere, social work involved, in the words of Alice Salomon, one of the Jewish founders of modern social work in Germany, “an assumption of duties for a wider circle than are usually performed by the mother in the home” (Taylor Allen, 213–214). Jewish women seemed to flock to the profession, evident in their overrepresentation within social work training colleges. Particularly during the Weimar Republic, social work stood out as a field generally free from the mounting antisemitism increasingly being felt in other professions. Among those Jewish women who trained as social workers, some elected to work with the working class, lower middle class and east European Jewish population sectors within the Jewish community that required, in the view of their middle-class patrons, the provision of health services, job training and “moral reform.” From their roles as organizers of mutual assistance and charitable work in the eighteenth century, middle-class Jewish women became, by the Weimar period, the agents of a rationalized and “scientific” social work, one that was viewed by its practitioners as the modern-day realization of the traditional Jewish ethic of charity. As a gendered sphere of Jewish communal activity, the social arena became not only a site where those in need received assistance, but also a form of Jewish social engagement that strengthened the bonds of solidarity and cohesion among those engaged in social work.
In Germany, this idea of "social motherhood" not only provided the intellectual foundation and political justification for the emergence of modern social work, but it also animated the German feminist movement from its early years until its collapse and cooptation under Hitler in 1933. Feminists' conceptions of citizenship, rooted in distinctly organic notions of German citizenship, emphasized duties over rights and tended to define individual self-fulfillment in the context of community. Social motherhood also formed a central pillar of the German Jewish feminist movement that was founded by Bertha Pappenheim in 1904. The Jüdischer Frauenbund, whose membership consisted primarily of middle-class married women, engaged in social work, provided career training for Jewish women, sought to combat White Slavery and fought for the equal participation of women in the Jewish community. Claiming the membership of more than twenty percent of German Jewish women, the Frauenbund became an increasingly important organization on the German Jewish scene until its dissolution by the Nazis in 1938.
Middle-class Jewish women who were less interested in joining their Jewish and feminist commitments could become active in the moderate wing of the German Women’s movement, whereas working-class and east European women tended to join unions or the socialist women’s movement. Within the bourgeois women’s movement, Jewish women assumed significant leadership roles: Fanny Lewald and Jenny Hirsch gave voice to the aspirations of the movement through their writings on the “Woman Question,” while Jeanette Schwerin (1852–1899), Lina Morgenstern, Alice Salomon and Henriette Fürth became important women’s rights leaders and social workers. It has been estimated that approximately one third of the leading German women’s rights activists were of Jewish ancestry.
The new democratic republic that was born amidst the catastrophe of German defeat in World War I promised Germans their first real possibility for liberal democratic governance. The constitution guaranteed equal rights to all its citizens, including full and complete equality for Jews and women. But the spirit of openness and tolerance enshrined in the constitution was quickly compromised by an eruption of virulent antisemitism that resulted in a growing economic and social exclusion of Jews, even as opportunities in some fields, such as politics and the professions, continued to expand. Weimar’s contradictory bequest to Jews—greater inclusion but also growing exclusion and intensified antisemitic rhetoric—was fueled by the ongoing economic and political instability of the period.
In addition to the political instability that dogged the Republic from its inception, social and economic changes ushered in by the war also led to shifting gender roles. Many more women entered the workforce out of economic necessity and young women also sought out new professional opportunities. These and other changes in turn gave rise to the widespread perception that Germany—and German Jewry—faced an unprecedented social crisis. Rising rates of juvenile delinquency and out-of-wedlock births, the decline in the number of marriages and numbers of children born, suggested to many middle class observers that the Jewish family could neither socially nor biologically reproduce itself. Nothing embodied the social threat posed by young women to the Jewish middle-class gender norms better than the image of the sexually liberated and financially independent “New Woman,” who reputedly rejected motherhood in favor of a hedonistic urban lifestyle. What is particularly significant in the 1920s is how the identification of social crisis, as in Berlin over one hundred years before, was conceptualized largely through the lens of gender.
Offering a counterpoint to the emancipated Jewish New Woman, male and female Jewish leaders placed new emphasis on the reproductive Jewish woman. Feminist leaders joined rabbis and eugenicists in calling for an increased Jewish birthrate and Jewish women’s organizations dedicated themselves to reversing Jewish women’s “self-imposed infertility” (von Ankum, 29). By reproducing Jews, women would be helping to fortify a declining Jewish community and fighting the rising tide of assimilation. In an age of assimilation, Jewish mothers had a vital role to play in the maintenance of Jewish difference itself.
In the construction of a redemptive Jewish femininity that would address the challenges of assimilation, Jewish women also sought to redefine the meaning of Jewish motherhood at a time when national identity among non-Jewish Germans was growing increasingly exclusionary. According to both male and female leaders at the time, a crucial part of a Jewish mother’s task in the 1920s was to educate her children in ways that would help reduce antisemitism, while simultaneously making her family a refuge from antisemitic hostility. Shaping a new form of Jewishness that could both resist the appeal of Gentile acceptance and minimize Gentile hatred became an important aspect of Jewish “women’s work” in the 1920s. Women were thus cast both as the problem and the solution, embodying both the threat of a barren future and the promise of collective renewal.
With the slide of the Weimar Republic into authoritarianism and ultimately dictatorship in the early 1930s, National Socialism signaled the end of democracy, women's equality and Jewish emancipation in Germany. Although National Socialism targeted Jewish men and women equally, the impact of restrictive regulations, increased antisemitism and social exclusion affected Jewish men and women in ways that were often distinct. Marion Kaplan's research on the 1930s shows that the social exclusion experienced by men in the workplace appears to have had a somewhat lesser impact than the increasing isolation from the informal social networks maintained by women. In addition, women often proved to be more attuned to the humiliations and suffering of their children. Perhaps less invested in their professional identities than their husbands, women were more willing to risk uncertainty abroad. Overall, women displayed greater adaptability than men in reorienting their expectations and their means of livelihood to accommodate new realities both at home and abroad. Ironically, it may have been women's very subordinate status that made them more amenable to finding work that under other circumstances would have been considered beneath them.
Gender roles in Jewish families also shifted as families faced new and extreme economic and social realities. Women increasingly represented or defended their husbands and other male relatives with the authorities. In addition, many more women worked outside the home than before the Nazi period and became involved in Jewish self-help organizations that had been established after Hitler’s rise to power. Some had never worked before, while others retrained for work in Germany or abroad. Although women often wanted to leave Germany before their husbands came to share their view, they actually emigrated less frequently than men. Parents sent sons away to foreign countries more frequently than daughters, and it was women, more than men, who remained behind as the sole caretakers for elderly parents. Indeed, a large proportion of the elderly population that remained in Germany was made up of women. In 1939, there were 6,674 widowed men and 28,347 widowed women in the expanded Reich.
Although men and women were equally targeted for persecution and death, they were subjected to different humiliations, regulations and work requirements. Within certain types of mixed marriages, Jewish men faced greater dangers than women. In the case of childless intermarriages consisting of a Jewish woman and an “Aryan” man, the female Jewish partner was not subjected to the same anti-Jewish laws as the rest of the Jewish population. But a Jewish man with a female “Aryan” wife in such a marriage received no special privileges. With the onset of the war, German Jewish women began to suffer the kind of physical brutality that many of their husbands, fathers and brothers had endured during the 1930s. Overall, however, Jewish men were probably more vulnerable to physical attack than women. Although Jewish women who went into hiding could move about more freely and were in less danger of being discovered than men, it is speculated that fewer women than men actually went into hiding. Despite their equal status as subhuman in the eyes of the Nazis, Jewish men and women frequently labored to survive under different constraints. As was the case in other countries outside of Germany, Jewish women appear to have suffered the ultimate fate of death in disproportionately greater numbers.
Even for an historical event as defining as the Holocaust, gender analysis proves a valuable means for elucidating different reactions to persecution by men and women, as well as highlighting gender-distinctive experiences of emigration, hiding and surviving in the camps. To view German Jewish history from the Enlightenment through the Holocaust from a gender perspective deepens our understanding of history in general and provides us with a richer, more complex and more inclusive picture of the Jewish past.
Allen, Ann Taylor. Feminism and Motherhood in Germany 1890–1914. New Brunswick, New Jersey: 1991, 213–214; Ankum, Katharina von. "Between Maternity and Modernity: Jewish Femininity and the German-Jewish ‘Symbiosis.’" Shofar 17/4 (Summer 1999): 20–33; Baader, Maria Benjamin. "When Judaism Turned Bourgeois: Gender in Jewish Associational Life and in the Synagogue, 1750–1850." Leo Baeck Institute Yearbook 46 (2001): 113–123; Fassmann, Irmgard Maya. Jüdinnen in der deutschen Frauenbewegung 1865–1919. New York: 1996; Freidenreich, Harriet. Female, Jewish and Educated: The Lives of Central European University Women. Bloomington: 2002; Hertz, Deborah. High Society in Old Regime Berlin. New Haven: 1988; Hyman, Paula. Gender and Assimilation in Modern Jewish History: The Role and Representation of Women. Seattle: 1992; Kaplan, Marion. Between Dignity and Despair: Jewish Life in Nazi Germany. New York: 1998; Idem. The Jewish Feminist Movement in Germany: The Campaigns of the Jüdischer Frauenbund, 1904–1938. Westport, CT: 1979; Idem. The Making of the Jewish Middle Class: Women, Family, and Identity in Imperial Germany. New York: 1991; Kaplan, Marion, ed. Geschichte des jüdischen Alltags in Deutschland. Vom 17. Jahrhundert bis 1945. Munich: 2003; Lowenstein, Steve. Berlin Jewish Community: Enlightenment, Family, Crisis 1770–1830. New York: 1994; Maurer, Trude. Die Entwicklung der jüdischen Minderheit in Deutschland (1780–1933). Tübingen: 1992; Meyer, Michael, and Michael Brenner. German-Jewish History in Modern Times. New York: 1997, Vols. 1–4; Quack, Sybille. Zuflucht Amerika. Zur Sozialgeschichte der Emigration deutsch-jüdischer Frauen in die USA 1933–1945. Bonn: 1995; Rahden, Till van. "Intermarriages, the ‘New Woman’ and the Situational Ethnicity of Breslau Jews from the 1870s to the 1920s." Leo Baeck Institute Yearbook 46 (2001): 125–150; Richarz, Monika. "Jewish Social Mobility in Germany during the Time of Emancipation (1790–1871)." Leo Baeck Institute Yearbook 20 (1975): 69–77; Springorum, Stefanie Schüler. "Deutsch-Jüdische Geschichte als Geschlechtergeschichte." Transversal: Zeitschrift des David-Herzog-Centrums für jüdische Studien 1 (2003): 3–15; Usborne, Cornelie. "The New Woman and Generational Conflict: Perceptions of Young Women's Sexual Mores in the Weimar Republic." In Generations in Conflict: Youth Revolt and Generation Formation in Germany, 1779–1968, edited by Mark Roseman, 137–163. New York: 1995; Volkov, Shulamit. Die Juden in Deutschland 1780–1918. Munich: 1994; Idem. "Jüdische Assimilation und Eigenart im Kaiserreich." In Jüdisches Leben und Antisemitismus im 19. und 20. Jahrhundert, edited by Shulamit Volkov. Munich: 1990, 131–145; Wertheimer, Jack. Unwelcome Strangers. New York: 1987; Zimmermann, Moshe. Die deutschen Juden, 1918–1945. Munich: 1997. | http://jwa.org/encyclopedia/article/germany-1750-1945 | 13
32 | This chapter has been published in the book INDIA & Southeast Asia to 1800.
In the Andhra land Satavahana king Simuka overthrew the last Kanva king in 30 BC and according to the Puranas reigned for 23 years. The Andhras were called Dasyus in the Aitareya Brahmana, and they were criticized for being degraded Brahmins or outcastes by the orthodox. For three centuries the kingdom of the Satavahanas flourished except for a brief invasion by the Shaka clan of Kshaharata led by Bhumaka and Nahapana in the early 2nd century CE. The latter was overthrown as the Satavahana kingdom with its caste system was restored by Gautamiputra Satakarni about 125 CE; his mother claimed he rooted out Shakas (Scythians), Yavanas (Greeks and Romans), and Pahlavas (Parthians), and records praised Gautamiputra for being virtuous, concerned about his subjects, taxing them justly, and stopping the mixing of castes. His successor Pulumavi ruled for 29 years and extended Satavahana power to the mouth of the Krishna River.
Trade with the Romans was active from the first century CE when Pliny complained that 550 million sesterces went to India annually, mostly for luxuries like spices, jewels, textiles, and exotic animals. The Satavahana kingdom was ruled in small provinces by governors, who became independent when the Satavahana kingdom collapsed. An inscription dated 150 CE credits the Shaka ruler Rudradaman with supporting the cultural arts and Sanskrit literature and repairing the dam built by the Mauryans. Rudradaman took back most of the territory the Satavahana king Gautamiputra captured from Nahapana, and he also conquered the Yaudheya tribes in Rajasthan. However, in the next century the warlike Yaudheyas became more powerful. The indigenous Nagas were also aggressive toward Shaka satraps in the 3rd century. In the Deccan after the Satavahanas, Vakataka kings ruled from the 3rd century to the 6th.
Probably in the second half of the first century BC Kharavela conquered much territory for Kalinga in southeastern India and patronized Jainism. He was said to have spent much money for the welfare of his subjects and had the canal enlarged that had been built three centuries before by the Nandas. In addition to a large palace, a monastery was built at Pabhara, and caves were excavated for the Jains.
Late in the 1st century BC a line of Iranian kings known as the Pahlavas ruled northwest India. The Shaka (Scythian) Maues, who ruled for about 40 years until 22 CE, broke relations with the Iranians and claimed to be the great king of kings himself. Maues was succeeded by three Shaka kings whose reigns overlapped. The Parthian Gondophernes seems to have driven the last Greek king Hermaeus out of the Kabul valley and taken over Gandhara from the Shakas, and it was said that he received at his court Jesus' disciple Thomas. Evidence indicates that Thomas also traveled to Malabar about 52 CE and established Syrian churches on the west coast before crossing to preach on the east coast around Madras, where he was opposed and killed in 68.
However, the Pahlavas were soon driven out by Scythians Chinese historians called the Yue-zhi. Their Kushana tribal chief Kujula Kadphises, his son Vima Kadphises, and Kanishka (r. 78-101) gained control of the western half of northern India by 79 CE. According to Chinese history one of these kings demanded to marry a Han princess, but the Kushanas were defeated by the Chinese led by Ban Chao at the end of the 1st century. Kanishka, considered the founder of the Shaka era, supported Buddhism, which held its 4th council in Kashmir during his reign. A new form of Mahayana Buddhism with the compassionate saints (bodhisattvas) helping to save others was spreading in the north, while the traditional Theravada of saints (arhats) working for their own enlightenment held strong in southern regions. Several great Buddhist philosophers were favored at Kanishka's court, including Parshva, Vasumitra, and Ashvaghosha; Buddhist missions were sent to central Asia and China, and Kanishka was said to have died fighting in central Asia. Kushana power decreased after the reign of Vasudeva (145-176), and they became vassals in the 3rd century after being defeated by Shapur I of the Persian Sasanian dynasty.
In the great vehicle or way of Mahayana Buddhism the saint (bodhisattva) is concerned with the virtues of benevolence, character, patience, perseverance, and meditation, determined to help all souls attain nirvana. This doctrine is found in the Sanskrit Surangama Sutra of the first century CE. In a dialog between the Buddha and Ananda before a large gathering of monks, the Buddha declares that keeping the precepts depends on concentration, which enhances meditation and develops intelligence and wisdom. He emphasizes that the most important allurement to overcome is sexual thought, desire, and indulgence. The next allurement is pride of ego, which makes one prone to be unkind, unjust, and cruel. Unless one can control the mind so that even the thought of killing or brutality is abhorrent, one will never escape the bondage of the world. Killing and eating flesh must be stopped. No teaching that is unkind can be the teaching of the Buddha. Another precept is to refrain from coveting and stealing, and the fourth is not to deceive or tell lies. In addition to the three poisons of lust, hatred, and infatuation, one must curtail falsehood, slander, obscene words, and flattery.
Ashvaghosha was the son of a Brahmin and at first traveled around arguing against Buddhism until he was converted, probably by Parshva. Ashvaghosha wrote the earliest Sanskrit drama still partially extant; in the Shariputra-prakarana the Buddha converts Maudgalyayana and Sariputra by philosophical discussion. His poem Buddhacharita describes the life and teachings of the Buddha very beautifully.
The Awakening of Faith in the Mahayana is ascribed to Ashvaghosha. That treatise distinguishes two aspects of the soul as suchness (bhutatathata) and the cycle of birth and death (samsara). The soul as suchness is one with all things, but this cannot be described with any attributes. This is negative in its emptiness (sunyata) but positive as eternally transcendent of all intellectual categories. Samsara comes forth from this ultimate reality. Multiple things are produced when the mind is disturbed, but they disappear when the mind is quiet. The separate ego-consciousness is nourished by emotional and mental prejudices (ashrava). Since all beings have suchness, they can receive instructions from all Buddhas and Bodhisattvas and receive benefits from them. By the purity of enlightenment they can destroy hindrances and experience insight into the oneness of the universe. All Buddhas feel compassion for all beings, treating others as themselves, and they practice virtue and good deeds for the universal salvation of humanity in the future, recognizing equality among people and not clinging to individual existence. Thus the prejudices and inequities of the caste system were strongly criticized.
Mahayana texts were usually written in Sanskrit instead of Pali, and the Prajnaparamita was translated into Chinese as early as 179 CE by Lokakshema. This dialog of 8,000 lines in which the Buddha spoke for himself and through Subhuti with his disciples was also summarized in verse. The topic is perfect wisdom. Bodhisattvas are described as having an even and friendly mind, being amenable, straight, soft-spoken, free of perceiving multiplicity, and free of self-interest. Detached, they do not want gain or fame, and their hearts are not overcome by anger nor do they seek a livelihood in the wrong way. Like an unstained lotus in the water they return from concentration to the sense world to mature beings and purify the field with compassion for all living things. Having renounced a heavenly reward they serve the entire world, like a mother taking care of her child. Thought produced is dedicated to enlightenment. They do not wish to release themselves in a private nirvana but become the world's resting place by learning not to embrace anything. With a mind full of friendliness and compassion, seeing countless beings with heavenly vision as like creatures on the way to slaughter, a Bodhisattva impartially endeavors to release them from their suffering by working for the welfare of all beings.
Nagarjuna was also born into a Brahmin family and in the 2nd century CE founded the Madhyamika (Middle Path) school of Mahayana Buddhism, although he was concerned about Hinayanists too. He was a stern disciplinarian and expelled many monks from the community at Nalanda for not observing the rules. A division among his followers led to the development of the Yogachara school of philosophy. Nagarjuna taught that all things are empty, but he answered critics that this does not deny reality but explains how the world happens. Only from the absolute point of view is there no birth or annihilation. The Buddha and all beings are like the sky and are of one nature. All things are nothing but mind established as phantoms; thus blissful or evil existence matures according to good or evil actions.
Nagarjuna discussed ethics in his Suhrllekha. He considered ethics faultless and sublime as the ground of all, like the earth. Aware that riches are unstable and void, one should give; for there is no better friend than giving. He recommended the transcendental virtues of charity, patience, energy, meditation, and wisdom, while warning against avarice, deceit, illusion, lust, indolence, pride, greed, and hatred. Attaining patience by renouncing anger he felt was the most difficult. One should look on another's wife like one's mother, daughter or sister. It is more heroic to conquer the objects of the six senses than a mass of enemies in battle. Those who know the world are equal to the eight conditions of gain and loss, happiness and suffering, fame and dishonor, and blame and praise. A woman (or man), who is gentle as a sister, winning as a friend, caring as a mother, and obedient as a servant, one should honor as a guardian goddess (god). He suggested meditating on kindness, pity, joy, and equanimity, abandoning desire, reflection, happiness, and pain. The aggregates of form, perception, feeling, will, and consciousness arise from ignorance. One is fettered by attachment to religious ceremonies, wrong views, and doubt. One should annihilate desire as one would extinguish a fire in one's clothes or head. Wisdom and concentration go together, and for the one who has them the sea of existence is like a grove.
During the frequent wars that preceded the Gupta empire in the 4th century the Text of the Excellent Golden Light (Suvarnaprabhasottama Sutra) indicated the Buddhist attitude toward this fighting. Everyone should be protected from invasion in peace and prosperity. While turning back their enemies, one should create in the earthly kings a desire to avoid fighting, attacking, and quarreling with neighbors. When the kings are contented with their own territories, they will not attack others. They will gain their thrones by their past merit and not show their mettle by wasting provinces; thinking of mutual welfare, they will be prosperous, well fed, pleasant, and populous. However, when a king disregards evil done in his own kingdom and does not punish criminals, injustice, fraud, and strife will increase in the land. Such a land afflicted with terrible crimes falls into the power of the enemy, destroying property, families, and wealth, as men ruin each other with deceit. Such a king, who angers the gods, will find his kingdom perishing; but the king, who distinguishes good actions from evil, shows the results of karma and is ordained by the gods to preserve justice by putting down rogues and criminals in his domain even to giving up his life rather than the jewel of justice.
After 20 BC many kings ruled Sri Lanka (Ceylon) during a series of succession fights until Vasabha (r. 67-111 CE) of the Lambakanna sect established a new dynasty that would rule more than three centuries. Vasabha promoted the construction of eleven reservoirs and an extensive irrigation system. The island was divided briefly by his son and his two brothers, as the Chola king Karikala invaded; but Gajabahu (r. 114-36) united the country and invaded the Chola territory.
A treaty established friendly relations, and Hindu temples were built on Sri Lanka, including some for the chaste goddess immortalized in the Silappadikaram. Sri Lanka experienced peace and prosperity for 72 years, and King Voharika Tissa (r. 209-31) even abolished punishment by mutilation. However, when the Buddhist schism divided people, the king suppressed the new Mahayana doctrine and banished its followers. Caught in an intrigue with the queen, his brother Abhayanaga (r. 231-40) fled to India, and then with Tamils invaded Sri Lanka, defeated and killed his brother, took the throne, and married the queen. Gothabhaya (r. 249-62) persecuted the new Vetulya doctrine supported by monks at Abhayagirivihara by having sixty monks branded and banished. Their accounts of this cruelty led Sanghamitta to tutor the princes in such a way that when Mahasena (r. 274-301) became king, he confiscated property from the traditional Mahavihara monastery and gave it to Abhayagirivihara.
The Tamil epic poem called The Ankle Bracelet (Silappadikaram) was written about 200 CE by Prince Ilango Adigal, brother of King Shenguttuvan, who ruled the western coast of south India. Kovalan, the son of a wealthy merchant in Puhar, marries Kannaki, the beautiful daughter of a wealthy ship-owner. The enchanting Madhavi dances so well for the king that he gives her a wreath that she sells to Kovalan for a thousand gold kalanjus, making her his mistress. They sing songs to each other of love and lust until he notices hints of her other loves; so he withdraws his hands from her body and departs. Kovalan returns to his wife in shame for losing his wealth; but she gives him her valuable ankle bracelet, and they decide to travel to Madurai. Kannaki courageously accompanies him although it causes her feet to bleed. They are joined by the saintly woman Kavundi, and like good Jains they try not to step on living creatures as they walk. They meet a saintly man who tells them that no one can escape reaping the harvest grown from the seeds of one's actions.
In the woods a charming nymph tries to tempt Kovalan with a message from Madhavi, but his prayer causes her to confess and run away. A soothsayer calls Kannaki the queen of the southern Tamil land, but she only smiles at such ignorance. A priest brings a message from Madhavi asking for forgiveness and noting that Kovalan has left his parents. Kovalan has the letter sent to his parents to relieve their anguish. Leaving his wife with the saint Kavundi, Kovalan goes to visit the merchants, while Kavundi warns him that the merits of his previous lives have been exhausted; they must prepare for misfortune. Reaping what is sown, many fall into predicaments from pursuing women, wealth, and pleasure; thus sages renounce all desire for worldly things. A Brahmin tells Kovalan that Madhavi has given birth to his baby girl; Kovalan has done good deeds in the past, but the Brahmin warns that he must pay for some errors committed in a past existence. Kovalan feels remorse for wasting his youth and neglecting his parents. He goes to town to sell the ankle bracelet; a goldsmith tells him only the queen can purchase it, but then the goldsmith tells King Korkai that he has found the man who stole the royal anklet. The king orders the thief put to death, and Kovalan is killed with a sword.
Weeping, Kannaki observes the spirit of her husband rise into the air, telling her to go on living. She goes to King Korkai and proves her husband did not steal the anklet by showing him that their anklet contains gems, not pearls. Filled with remorse for violating justice at the word of a goldsmith, the king dies, followed quickly in death by his queen. Kannaki goes out and curses the town as she walks around the city three times. Then she tears her left breast from her body and throws it in the dirt. A god of fire appears to burn the city, but she asks him to spare Brahmins, good men, cows, truthful women, cripples, the old, and children, while destroying evildoers. As the four genii who protect the four castes of Madurai depart, a conflagration breaks out. The goddess of Madurai explains to Kannaki that in a past life as Bharata her husband had renounced nonviolence and caused Sangaman to be beheaded, believing he was a spy. His wife cursed the killer, and now that action has borne fruit. Kannaki wanders desolate for two weeks, confessing her crime. Then the king of heaven proclaims her a saint, and she ascends with Kovalan in a divine chariot.
King Shenguttuvan, who had conquered Kadambu, leaves Vanji and hears stories about a woman with a torn-off breast suffering agony and about how Madurai was destroyed. The king decides to march north to bring back a great stone on the crowned heads of two kings, Kanaka and Vijaya, who had criticized him; the stone is to be carved into the image of the beloved goddess. His army crosses the Ganges and defeats the northern kings. The saintly Kavundi fasts to death. The fathers of Kovalan and Kannaki both give up their wealth and join religious orders, and Madhavi enters a Buddhist nunnery, followed later by her daughter. Madalan advises King Shenguttuvan to give up anger and criticizes him for contributing to war, causing the king to release prisoners and refund taxes. The Chola king notes how the faithful wife has proved the Tamil proverb that the virtue of women is of no use if the king fails to establish justice. Finally the author himself appears in the court of his brother Shenguttuvan and gives a list of moral precepts that begins:
Seek God and serve those who are near Him.
Do not tell lies.
Avoid eating the flesh of animals.
Do not cause pain to any living thing.
Be charitable, and observe fast days.
Never forget the good others have done to you.1
In a preamble added by a later commentator three lessons are drawn from this story: First, death results when a king strays from the path of justice; second, everyone must bow before a chaste and faithful wife; and third, fate is mysterious, and all actions are rewarded. Many sanctuaries were built in southern India and Sri Lanka to the faithful wife who became the goddess of chastity.
The Jain philosopher Kunda Kunda of the Digambara sect lived and taught sometime between the first and fourth centuries. He laid out his metaphysics in The Five Cosmic Constituents (Panchastikayasara). He noted that karmic matter brings about its own changes, as the soul by impure thoughts conditioned by karma does too. Freedom from sorrow comes from giving up desire and aversion, which cause karmic matter to cling to the soul, leading to states of existence in bodies with senses. Sense objects by perception then lead one to pursue them with desires or aversion, repeating the whole cycle. High ideals based on love, devotion, and justice, such as offering relief to the thirsty, hungry, and miserable, may purify the karmic matter; but anger, pride, deceit, coveting, and sensual pleasures interfere with calm thought, perception, and will, causing anguish to others, slander, and other evils. Meditating on the self with pure thought and controlled senses will wash off the karmic dust. Desire and aversion to pleasant and unpleasant states get the self bound by various kinds of karmic matter. The knowing soul associating with essential qualities is self-determined, but the soul led by desire for outer things gets bewildered and is other-determined.
Kunda Kunda discussed ethics in The Soul Essence (Samayasara). As long as one does not discern the difference between the soul and its thought activity, the ignorant will indulge in anger and other emotions that accumulate karma. The soul discerning the difference turns back from these. One with wrong knowledge takes the non-self for self, identifies with anger, and becomes the doer of karma. As the king has his warriors wage war, the soul produces, causes, binds, and assimilates karmic matter. Being affected by anger, pride, deceit, and greed, the soul becomes them. From the practical standpoint karma is attached to the soul, but from the real or pure perspective karma is neither bound nor attached to the soul; attachment to the karma destroys independence. The soul, knowing that karma is harmful, does not indulge in it and in self-contemplation attains liberation. The soul is bound by wrong beliefs, lack of vows, passions, and vibratory activity. Kunda Kunda suggested that one does not cause misery or happiness to living beings by one's body, speech, mind, or weapons, but living beings are happy or miserable by their own karma (actions). As long as one identifies with feelings of joy and sorrow and until soul realization shines out in the heart, one produces good and bad karma. Just as an artisan working with organs and holding tools does not have to identify with the job being performed, the soul can enjoy the fruit of karma without identifying with it.
In The Perfect Law (Niyamsara), Kunda Kunda described right belief, right knowledge, and right conduct that lead to liberation. The five vows are non-injury, truth, non-stealing, chastity, and non-possession. Renouncing passion, attachment, aversion, and other impure thoughts involves controlling the mind and speech with freedom from falsehood and restraining the body by not causing injury. The right conduct of repentance and equanimity is achieved by self-analysis, by avoiding transgressions and thoughts of pain and ill-will, and by self-contemplation with pure thoughts. Renunciation is practiced by equanimity toward all living beings with no ill feelings, giving up desires, controlling the senses, and distinguishing between the soul and material karma. A saint of independent actions is called an internal soul, but one devoid of independent action is called an external soul. The soul free from obstructions, independent of the senses, and liberated from good and bad karma is free from rebirth and eternal in the nirvana of perfect knowledge, bliss, and power.
After the disintegration in northern India in the third century CE, the Kushanas still ruled over the western Punjab and the declining Shakas over Gujarat and part of Malwa. Sri Lanka king Meghavarna (r. 301-28) sent gifts and asked permission to build a large monastery for Buddhist pilgrims north of the Bodhi tree, which eventually housed more than a thousand priests. Sasanian king Shapur II fought and made a treaty with the Kushanas in 350, but he was defeated by them twice in 367-68. After two previous kings of the Gupta dynasty, Chandra-gupta I, by marrying the Lichchhavi princess Kumaradevi, inaugurated the Gupta empire in 320, launching campaigns of territorial conquest. This expansion was greatly increased by their son Samudra-gupta, who ruled for about forty years until 380, conquering nine republics in Rajasthan and twelve states in the Deccan of central India. Many other kingdoms on the frontiers paid taxes and obeyed orders. The Guptas replaced tribal customs with the caste system. Rulers in the south were defeated, captured, and released to rule as vassals. Local ruling councils under the Guptas tended to be dominated by commercial interests. In addition to his military abilities Samudra-gupta was a poet and musician, and inscriptions praised his charity.
His son Chandra-gupta II (r. 380-414) finally ended the foreign Shaka rule in the west so that his empire stretched from the Bay of Bengal to the Arabian Sea. He allied his family with the Nagas by marrying princess Kubernaga; after his daughter married Vakataka king Rudrasena II, she ruled there as regent for 13 years. In the south the Pallavas ruled in harmony with the Guptas. The Chinese pilgrim Fa-hien described a happy and prosperous people not bothered by magistrates and rules; only those working state land had to pay a portion, and the king governed without using decapitation or corporal punishments. Kumara-gupta (r. 414-55) was apparently able to rule this vast empire without engaging in military campaigns. Only after forty years of peace did the threat of invading Hunas (White Huns) cause crown prince Skanda-gupta (r. 455-67) to fight for and restore Gupta fortunes by defeating the Huns about 460. After a struggle for the Gupta throne, Budha-gupta ruled for at least twenty years until about 500. Trade with the Roman empire had been declining since the 3rd century and was being replaced by commerce with southeast Asia. The empire was beginning to break up into independent states, such as Kathiawar and Bundelkhand, while Vakataka king Narendra-sena took over some Gupta territory.
Gupta decline continued as Huna chief Toramana invaded the Punjab and western India. His son Mihirakula succeeded as ruler about 515; according to Xuan Zang he ruled over India, and a Kashmir chronicle credited Mihirakula with conquering southern India and Sri Lanka. The Chinese ambassador Song-yun in 520 described the Hun king of Gandara as cruel, vindictive, and barbarous, not believing in the law of Buddha, having 700 war-elephants, and living with his troops on the frontier. About ten years later the Greek Cosmas from Alexandria wrote that the White Hun king had 2,000 elephants and a large cavalry, but his kingdom was west of the Indus River. However, Mihirakula was defeated by the Malwa chief Yashodharman. The Gupta king Narasimha-gupta Baladitya was also overwhelmed by Yashodharman and was forced to pay tribute to Mihirakula, according to Xuan Zang; but Baladitya later defeated Mihirakula, saving the Gupta empire from the Huns. Baladitya was also credited with building a great monastery at Nalanda. In the middle of the 6th century the Gupta empire declined during the reigns of its last two emperors, Kumara-gupta III and Vishnu-gupta. Gupta sovereignty was recognized in Kalinga as late as 569.
In the 4th century Vasubandhu studied and taught Sarvastivadin Buddhism in Kashmir, analyzing the categories of experience in the 600 verses of his Abhidharma-kosha, including the causes and ways to eliminate moral problems. Vasubandhu was converted to the Yogachara school of Mahayana Buddhism by his brother Asanga. Vasubandhu had a long and influential career as the abbot at Nalanda.
As an idealist Vasubandhu, summing up his ideas in twenty and thirty verses, found all experience to be in consciousness. Seeds are brought to fruition in the store of consciousness. Individuals are deluded by the four evil desires of their views of self as real, ignorance of self, self-pride, and self-love. He found good mental functions in belief, sense of shame, modesty, absence of coveting, energy, mental peace, vigilance, equanimity, and non-injury. Evil mental functions he listed as covetousness, hatred, attachment, arrogance, doubt, and false view; minor ones included anger, enmity, concealment, affliction, envy, parsimony, deception, fraud, injury, pride, high-mindedness, low-mindedness, unbelief, indolence, idleness, forgetfulness, distraction, and non-discernment. For Vasubandhu life is like a dream in which we create our reality in our consciousness; even the tortures of hell have no outward reality but are merely projections of consciousness. Enlightenment is when mental obstructions and projections are transcended without grasping; the habit-energies of karma, the six senses and their objects, and relative knowledge are all abandoned for perfect wisdom, purity, freedom, peace, and joy. Vasubandhu wrote that we can know other minds and influence each other for better and worse, because karma is intersubjective.
In 554 Maukhari king Ishana-varman claimed he won victories over the Andhras, Sulikas, and Gaudas. A Gurjara kingdom was founded in the mid-6th century in Rajputana by Harichandra, as apparently the fall of empires in northern India caused this Brahmin to exchange scriptures for arms. Xuan Zang praised Valabhi king Shiladitya I, who ruled about 580, for having great administrative ability and compassion. Valabhi hosted the second Jain council that established the Jain canon in the 6th century. Valabhi king Shiladitya III (r. 662-84) assumed an imperial title and conquered Gurjara. However, internal conflicts as well as Arab invasion destroyed the Valabhi kingdom by about 735. The Gurjara kingdom was also overrun by Arabs, but Pratihara king Nagabhata is credited with turning back the Muslim invaders in the northwest; he was helped in this effort by Gurjara king Jayabhata IV and Chalukya king Avanijanashraya-Pulakeshiraja in the south.
After Thaneswar king Prabhakara-vardhana (r. 580-606) died, his son Rajya-vardhana marched against the hostile Malava king with 10,000 cavalry and won; but according to Banabhatta, the king of Malava, after gaining his confidence with false civilities, had him murdered. His brother Harsha-vardhana (r. 606-47) swore he would clear the earth of Gaudas; starting with 5,000 elephants, 2,000 cavalry, and 50,000 infantry, his army grew as military conquests enabled him to become the most powerful ruler of northern India at Kanauj. Somehow Harsha's conflicts with Valabhi and Gurjara led to his war with Chalukya king Pulakeshin II; but his southern campaign was apparently a failure, and Sindh remained an independent kingdom.
However, in the east according to Xuan Zang by 643 Harsha had subjugated Kongoda and Orissa. That year the Chinese pilgrim observed two great assemblies, one at Kanauj and the other a religious gathering at Prayaga, where the distribution of accumulated resources drew twenty kings and about 500,000 people. Xuan Zang credited Harsha with building rest-houses for travelers, but he noted that the penalty for breaching the social morality or filial duties could be mutilation or exile. After Gauda king Shashanka's death Harsha had conquered Magadha, and he eventually took over western Bengal. Harsha also was said to have written plays, and three of them survive. Xuan Zang reported that he divided India's revenues into four parts for government expenses, public service, intellectual rewards, and religious gifts. During his reign the university in Nalanda became the most renowned center of Buddhist learning. However, no successor of Harsha-vardana is known, and apparently his empire ended with his life.
Wang-Xuan-zi gained help from Nepal against the violent usurper of Harsha's throne, who was sent to China as a prisoner; Nepal also sent a mission to China in 651. The dynasty called the Later Guptas for their similar names took over Magadha and ruled there for almost a century. Then Yashovarman brought Magadha under his sovereignty as he also invaded Bengal and defeated the ruler of Gauda. In 713 Kashmir king Durlabhaka sent an envoy to the Chinese emperor asking for aid against invading Arabs. His successor Chandrapida was able to defend Kashmir against Arab aggression. He was described as humane and just, but in his ninth year as king he was killed by his brother Tarapida, whose cruel and bloody reign lasted only four years. Lalitaditya became king of Kashmir in 724 and in alliance with Yashovarman defeated the Tibetans; but Lalitaditya and Yashovarman could not agree on a treaty; Lalitaditya was victorious, taking over Kanauj and a vast empire. The Arabs were defeated in the west, and Bengal was conquered in the east, though Lalitaditya's record was tarnished when he had the Gauda king of Bengal murdered after promising him safe conduct. Lalitaditya died about 760. For a century Bengal had suffered anarchy in which the strong devoured the weak.
Arabs had been repelled at Sindh in 660, but they invaded Kabul and Zabulistan during the Caliphate of Muawiyah (661-80). In 683 Kabul revolted and defeated the Muslim army, but two years later Zabul's army was routed by the Arabs. After Al-Hajjaj became governor of Iraq in 695 the combined armies of Zabul and Kabul defeated the Arabs; but a huge Muslim army returned to ravage Zabulistan four years later. Zabul paid tribute until Hajjaj died in 714. Two years before that, Hajjaj had equipped Muslim general Muhammad-ibn-Qasim for a major invasion of Sindh which resulted in the chiefs accepting Islam under sovereignty of the new Caliph 'Umar II (717-20).
Pulakeshin I ruled the Chalukyas for about thirty years in the middle of the 6th century. He was succeeded by Kirtivarman I (r. 566-97), who claimed he destroyed the Nalas, Mauryas, and Kadambas. Mangalesha (r. 597-610) conquered the Kalachuris and Revatidvipa, but he lost his life in a civil war over the succession with his nephew Pulakeshin II (r. 610-42). Starting in darkness enveloped by enemies, this king made Govinda an ally and regained the Chalukya empire by reducing Kadamba capital Vanavasi, the Gangas, and the Mauryas, marrying a Ganga princess. In the north Pulakeshin II subdued the Latas, Malavas, and Gurjaras; he even defeated the mighty Harsha of Kanauj and won the three kingdoms of Maharashtra, Konkana, and Karnata. After conquering the Kosalas and Kalingas, an Eastern Chalukya dynasty was inaugurated by his brother Kubja Vishnuvardhana and absorbed the Andhra country when Vishnukundin king Vikramendra-varman III was defeated. Moving south, Pulakeshin II allied himself with the Cholas, Keralas, and Pandyas in order to invade the powerful Pallavas. By 631 the Chalukya empire extended from sea to sea. Xuan Zang described the Chalukya people as stern and vindictive toward enemies, though they would not kill those who submitted. They and their elephants fought while inebriated, and Chalukya laws did not punish soldiers who killed. However, Pulakeshin II was defeated and probably killed in 642 when the Pallavas in retaliation for an attack on their capital captured the Chalukya capital at Badami.
For thirteen years the Pallavas held some territory while Chalukya successors fought for the throne. Eventually Vikramaditya I (r. 655-81) became king and recovered the southern part of the empire from the Pallavas, fighting three Pallava kings in succession. He was followed by his son Vinayaditya (r. 681-96), whose son Vijayaditya (r. 696-733) also fought with the Pallavas. Vijayaditya had a magnificent temple built to Shiva and donated villages to Jain teachers. His son Vikramaditya II (r. 733-47) also attacked the Pallavas and took Kanchi, but instead of destroying it he donated gold to its temples. His son Kirtivarman II (r. 744-57) was the last ruler of the Chalukya empire, as he was overthrown by Rashtrakuta king Krishna I. However, the dynasty of the Eastern Chalukyas still remained to challenge the Rashtrakutas. In the early 8th century the Chalukyas gave refuge to Zoroastrians called Parsis, who had been driven out of Persia by Muslims. A Christian community still lived in Malabar, and in the 10th century the king of the Cheras granted land to Joseph Rabban for a Jewish community in India.
Pallava king Mahendra-varman I, who ruled for thirty years at the beginning of the 7th century, lost northern territory to the Chalukyas. As a Jain he had persecuted other religions, but after he tested the Shaivite mystic Appar and was converted by him, he destroyed the Jain monastery at Pataliputra. His son Narasimha-varman I defeated Pulakeshin II in three battles, capturing the Chalukya capital at Vatapi in 642 with the aid of the Sri Lanka king. He ruled for 38 years, and his capital at Kanchi contained more than a hundred Buddhist monasteries housing over 10,000 monks, and there were many Jain temples too. During the reign (c. 670-95) of Pallava king Parameshvara-varman I the Chalukyas probably captured Kanchi, as they did again about 740.
On the island of Sri Lanka the 58th and last king listed in the Mahavamsa was Mahasena (r. 274-301). He oversaw the building of sixteen tanks and irrigation canals. The first of 125 kings listed up to 1815 in the Culavamsa, Srimeghavanna, repaired the monasteries destroyed by Mahasena. Mahanama (r. 406-28) married the queen after she murdered his brother Upatissa. Mahanama was the last king of the Lambakanna dynasty that had lasted nearly four centuries. His death was followed by an invasion from southern India that limited Sinhalese rule to the Rohana region.
Buddhaghosha was converted to Buddhism and went to Sri Lanka during the reign of Mahanama. There he translated and wrote commentaries on numerous Buddhist texts. His Visuddhimagga explains ways to attain purity by presenting the teachings of the Buddha in three parts on conduct, concentration, and wisdom. Buddhaghosha also collected parables and stories illustrating Buddhist ethics by showing how karma brings the consequences of actions back to one, sometimes in another life. One story showed how a grudge can cause alternating injuries between two individuals from life to life. Yet if no grudge is held, the enmity subsides. In addition to the usual vices of killing, stealing, adultery, and a judge taking bribes, occupations that could lead to hell include making weapons, selling poison, being a general, collecting taxes, living off tolls, hunting, fishing, and even gathering honey. The Buddhist path is encouraged with tales of miracles and by showing the benefits of good conduct and meditation.
The Moriya clan chief Dhatusena (r. 455-73) improved irrigation by having a bridge constructed across the Mahavali River. He led the struggle to expel the foreigners from the island and restored Sinhalese authority at Anuradhapura. His eldest son Kassapa (r. 473-91) took him prisoner and usurped the throne but lost it with his life to his brother Moggallana (r. 491-508), who used an army of mercenaries from south India. He had the coast guarded to prevent foreign attacks and gave his umbrella to the Buddhist community as a token of submission. His son Kumara-Dhatusena (r. 508-16) was succeeded by his son Kittisena, who was quickly deposed by the usurping uncle Siva. He was soon killed by Upatissa II (r. 517-18), who revived the Lambakanna dynasty and was succeeded by his son Silakala (r. 518-31). Moggallana II (r. 531-51) had to fight for the throne; but he was a poet and was considered a pious ruler loved by the people. Two rulers were killed as the Moriyas regained power. The second, Mahanaga (r. 569-71), had been a rebel at Rohana and then its governor before becoming king at Anuradhapura. Aggabodhi I (r. 571-604) and Aggabodhi II (r. 604-14) built monasteries and dug water tanks for irrigation. A revolt by the general Moggallana III (r. 614-19) overthrew the last Moriya king and led to a series of civil wars and succession battles suffered by the Sri Lanka people until Manavamma (r. 684-718) re-established the Lambakanna dynasty.
Included in a didactic Tamil collection of "Eighteen Minor Poems" are the Naladiyar and the famous Kural. The Naladiyar consists of 400 quatrains of moral aphorisms. In the 67th quatrain the wise say it is not cowardice to refuse a challenge when men rise in enmity and wish to fight; even when enemies do the worst, it is right not to do evil in return. Like milk the path of virtue is one, though many sects teach it. (118) The treasure of learning needs no safeguard, for fire cannot destroy it nor can kings take it. Other things are not true wealth, but learning is the best legacy to leave one's children. (134) Humility is greatness, and self-control is what the gainer actually gains. Only the rich who relieve the need of their neighbors are truly wealthy. (170) The good remember another's kindness, but the base only recall fancied slights. (356)
The Tamil classic, The Kural by Tiru Valluvar, was probably written about 600 CE, plus or minus two centuries. This book contains 133 chapters of ten pithy couplets each and is divided into three parts on the traditional Hindu goals of dharma (virtue or justice), artha (success or wealth), and kama (love or pleasure). The first two parts contain moral proverbs; the third is mostly expressions of love, though there is the statement that one-sided love is bitter while balanced love is sweet. Valluvar transcends the caste system by suggesting that we call Brahmins those who are virtuous and kind to all that live.
Here are a few of Valluvar's astute observations on dharma. Bliss hereafter is the fruit of a loving life here. (75) Sweet words with a smiling face are more pleasing than a gracious gift. (92) He asked, "How can one pleased with sweet words oneself use harsh words to others?"2 Self-control takes one to the gods, but its lack to utter darkness. (121) Always forgive transgressions, but better still forget them. (152) The height of wisdom is not to return ill for ill. (203) "The only gift is giving to the poor; all else is exchange." (221) If people refrain from eating meat, there will be no one to sell it. (256) "To bear your pain and not pain others is penance summed up." (261) In all the gospels he found nothing higher than the truth. (300) I think the whole chapter on not hurting others is worth quoting.
The pure in heart will never hurt others even for wealth or renown.
The code of the pure in heart is not to return hurt for angry hurt.
Vengeance even against a wanton insult does endless damage.
Punish an evil-doer by shaming him with a good deed, and forget.
What good is that sense which does not feel and prevent
all creatures' woes as its own?
Do not do to others what you know has hurt yourself.
It is best to refrain from willfully hurting anyone, anytime, anyway.
Why does one hurt others knowing what it is to be hurt?
The hurt you cause in the forenoon self-propelled
will overtake you in the afternoon.
Hurt comes to the hurtful; hence it is
that those don't hurt who do not want to be hurt.3
Valluvar went even farther when he wrote, "Even at the cost of one's own life one should avoid killing." (327) For death is but a sleep, and birth an awakening. (339)
In the part on artha (wealth) Valluvar defined the unfailing marks of a king as courage, liberality, wisdom and energy. (382) The just protector he deemed the Lord's deputy, and the best kings have grace, bounty, justice, and concern. "The wealth which never declines is not riches but learning." (400) "The wealth of the ignorant does more harm than the want of the learned." (408) The truly noble are free of arrogance, wrath, and pettiness. (431) "A tyrant indulging in terrorism will perish quickly." (563) "Friendship curbs wrong, guides right, and shares distress." (787) "The soul of friendship is freedom, which the wise should welcome." (802) "The world is secure under one whose nature can make friends of foes." (874) Valluvar believed it was base to be discourteous even to enemies (998), and his chapter on character is also worth quoting.
All virtues are said to be natural to those who acquire character as a duty.
To the wise the only worth is character, naught else.
The pillars of excellence are five: love, modesty,
altruism, compassion, truthfulness.
The core of penance is not killing, of goodness not speaking slander.
The secret of success is humility;
it is also wisdom's weapon against foes.
The touchstone of goodness is to own one's defeat even to inferiors.
What good is that good which does not return good for evil?
Poverty is no disgrace to one with strength of character.
Seas may whelm, but men of character will stand like the shore.
If the great fail in nobility, the earth will bear us no more.4
Kamandaka's Nitisara in the first half of the 8th century was primarily based on Kautilya's Arthashastra and was influenced by the violence in the Mahabharata, as he justified both open fighting when the king is powerful and treacherous fighting when he is at a disadvantage. Katyayana, like Kamandaka, accepted the tradition of the king's divinity, although he argued that this should make ruling justly a duty. Katyayana followed Narada's four modes of judicial decisions as the dharma of moral law when the defendant confesses, judicial proof when the judge decides, popular custom when tradition rules, and royal edict when the king decides. Crimes of violence were distinguished from the deception of theft. Laws prevented the accumulated interest on debts from exceeding the principal. Brahmins were still exempt from capital punishment and confiscation of property, and most laws differed according to one's caste. The Yoga-vasishtha philosophy taught that as a bird flies with two wings, the highest reality is attained through knowledge and work.
The famous Vedanta philosopher Shankara was born into a Brahmin family; his traditional dates are 788-820, though some scholars believe he lived about 700-50. It was said that when he was eight, he became an ascetic and studied with Govinda, a disciple of the monist Gaudapada; at 16 he was teaching many in the Varanasi area. Shankara wrote long commentaries on the primary Vedanta text (the Brahma Sutra), on the Bhagavad-Gita, and on ten of the Upanishads, always emphasizing the non-dual reality of Brahman (God), that the world is false, and that the atman (self or soul) is not different from Brahman.
Shankara traveled around India and to Kashmir, defeating opponents in debate; he criticized human sacrifice to the god Bhairava and branding the body. He performed a funeral for his mother even though it was considered improper for a sannyasin (renunciate). Shankara challenged the Mimamsa philosopher Mandana Mishra, who emphasized the duty of Vedic rituals, by arguing that knowledge of God is the only means to final release, and after seven days he was declared the winner by Mandana's wife. He tended to avoid the cities and taught sannyasins and intellectuals in the villages. Shankara founded monasteries in the south at Shringeri of Mysore, in the east at Puri, in the west at Dvaraka, and in the northern Himalayas at Badarinath. He wrote hymns glorifying Shiva as God, and Hindus would later believe he was an incarnation of Shiva. He criticized the corrupt left-hand (sexual) practices used in Tantra. His philosophy spread, and he became perhaps the most influential of all Hindu philosophers.
In the Crest-Jewel of Wisdom Shankara taught that although action is for removing bonds of conditioned existence and purifying the heart, reality can only be attained by right knowledge. Realizing that an object perceived is a rope removes the fear and sorrow from the illusion it is a snake. Knowledge comes from perception, investigation, or instruction, not from bathing, giving alms, or breath control. Shankara taught enduring all pain and sorrow without thought of retaliation, dejection, or lamentation. He noted that the scriptures gave the causes of liberation as faith, devotion, concentration, and union (yoga); but he taught, "Liberation cannot be achieved except by direct perception of the identity of the individual with the universal self."5 Desires lead to death, but one who is free of desires is fit for liberation. Shankara distinguished the atman as the real self or soul from the ahamkara (ego), which is the cause of change, experiences karma (action), and destroys the rest in the real self. From neglecting the real self spring delusion, ego, bondage, and pain. The soul is everlasting and full of wisdom. Ultimately both bondage and liberation are illusions that do not exist in the soul.
Indian drama was analyzed by Bharata in the Natya Shastra, probably from the third century CE or before. Bharata ascribed a divine origin to drama and considered it a fifth Veda; its origin seems to be from religious dancing. In the classical plays Sanskrit is spoken by the Brahmins and noble characters, while Prakrit vernaculars are used by others and most women. According to Bharata poetry (kavya), dance (nritta), and mime (nritya) in life's play (lila) produce emotion (bhava), but only drama (natya) produces "flavor" (rasa). The drama uses the eight basic emotions of love, joy (humor), anger, sadness, pride, fear, aversion, and wonder, attempting to resolve them in the ninth holistic feeling of peace. These are modified by 33 less stable sentiments he listed as discouragement, weakness, apprehension, weariness, contentment, stupor, elation, depression, cruelty, anxiety, fright, envy, arrogance, indignation, recollection, death, intoxication, dreaming, sleeping, awakening, shame, demonic possession, distraction, assurance, indolence, agitation, deliberation, dissimulation, sickness, insanity, despair, impatience, and inconstancy. The emotions are manifested by causes, effects, and moods. The spectators should be of good character, intelligent, and empathetic.
Although some scholars date him earlier, Bhasa's plays can probably be placed after Ashvaghosha in the second or third century CE. In 1912 thirteen Trivandrum plays were discovered that scholars have attributed to Bhasa. Five one-act plays were adapted from situations in the epic Mahabharata. Dutavakya has Krishna as a peace envoy from the Pandavas giving advice to Duryodhana. In Karnabhara the warrior Karna sacrifices his armor by giving it to Indra, who is in the guise of a Brahmin. Dutaghatotkacha shows the envoy Ghatotkacha carrying Krishna's message to the Kauravas. Urubhanga depicts Duryodhana as a hero treacherously attacked below the waist by Bhima at the signal of Krishna. In Madhyama-vyayoga the middle son is going to be sacrificed, but it turns out to be a device used by Bhima's wife Hidimba to get him to visit her. Each of these plays seems to portray heroic virtues didactically for an aristocratic audience. The Mahabharata also furnishes the episode for the Kauravas' cattle raid of Virata in the Pancharatra, which seems to have been staged to glorify some sacrifice. Bhasa's Abhisheka follows the Ramayana closely in the coronation of Rama, and Pratima also reworks the Rama story prior to the war. Balacharita portrays heroic episodes in the childhood of Krishna.
In Bhasa's Avimaraka the title character heroically saves princess Kurangi from a rampaging elephant, but he says he is an outcast. Dressed as a thief, Avimaraka sneaks into the palace to meet the princess, saying,
Once we have done what we can even failure is no disgrace.
Has anyone ever succeeded by saying, "I can't do it"?
A person becomes great by attempting great things.6
He spends a year there with Kurangi before he is discovered and must leave. Avimaraka is about to jump off a mountain when a fairy (Vidyadhara) gives him a ring by which he can become invisible. Using invisibility, he and his jester go back into the palace just in time to catch Kurangi before she hangs herself. The true parentage of the royal couple is revealed by the sage Narada, and Vairantya king Kuntibhoja gives his new son-in-law the following advice:
With tolerance be king over Brahmins.
With compassion win the hearts of your subjects.
With courage conquer earth's rulers.
With knowledge of the truth conquer yourself.7
Bhasa uses the story of legendary King Udayana in two plays. In Pratijna Yaugandharayana the Vatsa king at Kaushambi, Udayana, is captured by Avanti king Pradyota so that Udayana can be introduced to the princess Vasavadatta by tutoring her in music, a device which works as they fall in love. The title comes from the vow of chief minister Yaugandharayana to free his sovereign Udayana; he succeeds in rescuing him and his new queen Vasavadatta. In Bhasa's greatest play, The Dream of Vasavadatta, the same minister, knowing his king's reluctance to enter a needed political marriage, pretends that he and queen Vasavadatta are killed in a fire so that King Udayana will marry Magadha princess Padmavati. Saying Vasavadatta is his sister, Yaugandharayana entrusts her into the care of Padmavati because of the prophecy that Padmavati will become Udayana's queen. The play is very tender, and both princesses are noble and considerate of each other; it also includes an early example of a court jester. Udayana is still in love with Vasavadatta, and while he rests half asleep, Vasavadatta, thinking she is comforting Padmavati's headache, gently touches him. The loving and grieving couple are reunited; Padmavati is also accepted as another wife; and the kingdom of Kaushambi is defended by the marriage alliance.
Bhasa's Charudatta is about the courtesan Vasantasena, who initiates a love affair with an impoverished merchant, but the manuscript is cut off abruptly after four acts. However, this story was adapted and completed in The Little Clay Cart, attributed to a King Sudraka, whose name means a little servant. In ten acts this play is a rare example of what Bharata called a maha-nataka or "great play." The play is revolutionary not only because the romantic hero and heroine are a married merchant and a courtesan, but because the king's brother-in-law, Sansthanaka, is portrayed as a vicious fool, and because by the end of the play the king is overthrown and replaced by a man he had falsely imprisoned. Vasantasena rejects the attentions of the insulting Sansthanaka, saying that true love is won by virtue not violence; she is in love with Charudatta, who is poor because he is honest and generous, as money and virtue seldom keep company these days. Vasantasena kindly pays the gambling debts of his shampooer, who then becomes a Buddhist monk. Charudatta, not wearing jewels any more, gives his cloak to a man who saved the monk from a rampaging elephant.
Vasantasena entrusts a golden casket of jewelry to Charudatta, but Sharvilaka, breaking into his house to steal, is given it so that he can gain the courtesan girl Madanika. So that he won't get a bad reputation, Charudatta's wife gives a valuable pearl necklace to her husband, and he realizes he is not poor because he has a wife whose love outlasts his wealthy days. Madanika is concerned that Sharvilaka did something bad for her sake and tells him to restore the jewels, and he returns them to Vasantasena on the merchant's behalf, while she generously frees her servant Madanika for him.
Charudatta gives Vasantasena the more valuable pearl necklace, saying he gambled away her jewels. As the romantic rainy season approaches, the two lovers are naturally drawn together. Charudatta's child complains that he has to play with a little clay cart as a toy, and Vasantasena promises him a golden one. She gets into the wrong bullock cart and is taken to the garden of Sansthanaka, where he strangles her for rejecting his proposition. Then he accuses Charudatta of the crime, and because of his royal influence in the trial, Charudatta is condemned to be executed after his friend shows up with Vasantasena's jewels. However, the monk has revived Vasantasena, and just before Charudatta's head is to be cut off, she appears to save him. Sharvilaka has killed the bad king and anointed a good one. Charudatta lets the repentant Sansthanaka go free, and the king declares Vasantasena a wedded wife and thus no longer a courtesan.
Although he is considered India's greatest poet, it is not known when Kalidasa lived. Probably the best educated guess has him flourishing about 400 CE during the reign of Chandragupta II. The prolog of his play Malavika and Agnimitra asks the audience to consider a new poet and not just the celebrated Bhasa and two others. In this romance King Agnimitra, who already has two queens, in springtime falls in love with the dancing servant Malavika, who turns out to be a princess when his foreign conflicts are solved. The king is accompanied throughout by a court jester, who with a contrivance frees Malavika from confinement by the jealous queen. The only female who speaks Sanskrit in Kalidasa's plays is the Buddhist nun, who judges the dance contest and explains that Malavika had to be a servant for a year in order to fulfill a prophecy that she would marry a king after doing so. In celebration of the victory and his latest marriage, the king orders all prisoners released.
In Kalidasa's Urvashi Won by Valor, King Pururavas falls in love with the heavenly nymph Urvashi. The king's jester Manavaka reveals this secret to the queen's maid Nipunika. Urvashi comes down to earth with her friend and writes a love poem on a birch-leaf. The queen sees this also but forgives her husband's guilt. Urvashi returns to paradise to appear in a play; but accidentally revealing her love for Pururavas, she is expelled to earth and must stay until she sees the king's heir. The queen generously offers to accept a new queen who truly loves the king, and Urvashi makes herself visible to Pururavas. In the fourth act a moment of jealousy causes Urvashi to be changed into a vine, and the king in searching for her dances and sings, amorously befriending animals and plants until a ruby of reunion helps him find the vine; as he embraces the vine, it turns into Urvashi. After many years have passed, their son Ayus gains back the ruby that was stolen by a vulture. When Urvashi sees the grown-up child she had sent away so that she could stay with the king, she must return to paradise; but the king gives up his kingdom to their son so that he can go with her, although a heavenly messenger indicates that he can remain as king with Urvashi until his death.
The most widely acclaimed Indian drama is Kalidasa's Shakuntala and the Love Token. While hunting, King Dushyanta is asked by the local ascetics not to kill deer, saying, "Your weapon is meant to help the weak not smite the innocent."8 The king and Shakuntala, who is the daughter of a nymph and is being raised by ascetics, fall in love with each other. The king is accompanied by a foolish Brahmin who offers comic relief. Although he has other wives, the king declares that he needs only the earth and Shakuntala to sustain his line. They are married in the forest, and Shakuntala becomes pregnant. Kanva, who raised her, advises the bride to obey her elders, treat her fellow wives as friends, and not cross her husband in anger even if he mistreats her. The king returns to his capital and gives his ring to Shakuntala so that he will recognize her when she arrives later. However, because of a curse on her from Durvasas, he loses his memory of her, and she loses the ring. Later the king refuses to accept this pregnant woman he cannot recall, and in shame she disappears. A fisherman finds the ring in a fish; when the king gets it back, his memory of Shakuntala returns. The king searches for her and finds their son on Golden Peak with the birthmarks of a universal emperor; now he must ask to be recognized by her. They are happily reunited, and their child Bharata is to become the founding emperor of India.
An outstanding political play was written by Vishakhadatta, who may also have lived at the court of Chandragupta II or as late as the 9th century. Rakshasa's Ring is set when Chandragupta, who defeated Alexander's successor Seleucus in 305 BC, is becoming Maurya emperor by overcoming the Nandas. According to tradition he was politically assisted by his minister Chanakya, also known as Kautilya, supposed author of the famous treatise on politics, Artha Shastra. Rakshasa, whose name means demon, had sent a woman to poison Chandragupta, but Chanakya had her poison King Parvataka instead. Rakshasa supports Parvataka's son Malayaketu; Chanakya cleverly assuages public opinion by letting Parvataka's brother have half the kingdom but arranges for his death too. Chanakya even pretends to break with Chandragupta to further his plot.
Chanakya is able to use a Jain monk and a secretary by pretending to punish them and having Siddarthaka rescue the secretary. With a letter that Chanakya composed and had the secretary write, and with Rakshasa's ring taken from the home of a jeweler who gave Rakshasa and his family refuge, these agents pretend to serve Malayaketu but make him suspect Rakshasa's loyalty and execute the allied princes that Rakshasa had gained for him. Ironically Rakshasa's greatest quality is loyalty, and after he realizes he has been trapped, he decides to sacrifice himself to save the jeweler from being executed. By then Malayaketu's attack on Chandragupta's capital has collapsed from lack of support, and he is captured. Chanakya's manipulations have defeated Chandragupta's rivals without a fight, and he appoints Rakshasa chief minister in his place; Rakshasa then spares the life of Malayaketu. Chanakya (Kautilya) announces that the emperor (Chandragupta) grants Malayaketu his ancestral territories and releases all prisoners except draft animals.
Ratnavali was attributed to Harsha, who ruled at Kanauj in the first half of the 7th century. This comedy reworks the story of King Udayana, who, though happily married to Vasavadatta, is led into marrying her Simhalese cousin Ratnavali for the political reasons contrived by his minister Yaugandharayana. Ratnavali, serving as the queen's maid under the name Sagarika, falls in love with the king and paints his portrait. Her friend then paints her portrait beside the king's, which enamors him after he hears the story of the painting from a mynah bird that repeats the maidens' conversation. Queen Vasavadatta becomes suspicious, and when the jester plans to bring Sagarika to the king dressed like the queen, the queen, learning of it, appears veiled herself and exposes the affair. Sagarika tries to hang herself but is saved by the king. The jealous queen puts Sagarika in chains and the noose around the jester's neck. Yet in the last act a magician contrives a fire, and the king saves Sagarika once again. A necklace reveals that she is a princess, and the minister Yaugandharayana explains how he brought the lovers together.
Two other plays are also attributed to Harsha: Priyadarshika is another harem comedy, but Joy of the Serpents (Nagananda) shows how prince Jimutavahana gives up his own body to stop a sacrifice of serpents to the divine Garuda. A royal contemporary of Harsha, Pallava king Mahendravikramavarman wrote a one-act farce called "The Sport of Drunkards" (Mattavilasa) in which an inebriated Shaivite ascetic accuses a Buddhist monk of stealing his begging bowl made from a skull; but after much satire it is found to have been taken by a dog.
Bhavabhuti lived in the early 8th century and was said to have been the court poet in Kanauj of Yashovarman, a king also supposed to have written a play about Rama. Bhavabhuti depicted the early career of Rama in Mahavira-charita and then produced The Later Story of Rama. In this latter play Rama's brother Lakshmana shows Rama and Sita murals of their past, and Rama asks Sita for forgiveness for having put her through a trial by fire to show the people her purity after she had been captured by the evil Ravana. Rama has made a vow to serve the people's good above all and so orders Sita into exile because of their continuing suspicions. Rama, moved by the demon Sambuka's penance, frees him instead of killing him. Sita has given birth to two sons, Lava and Kusha, and twelve years pass. When Janaka hears about his daughter Sita's exile, he gives up eating meat; when he meets Rama's mother Kaushalya, she faints at the memory. Rama's divine weapons have been passed on to his sons, and Lava is able to pacify Chandraketu's soldiers by meditating. Rama has Lava remove the spell, and Kusha recites the Ramayana taught him by Valmiki, who raised the sons. Finally Sita is joyfully reunited with Rama and their sons.
Malati and Madhava by Bhavabhuti takes place in the city of Padmavati. Although the king has arranged for Nandana to marry his minister's daughter Malati, the Buddhist nun Kamandaki manages eventually to bring together the suffering lovers Madhava and Malati. Malati has been watching Madhava and draws his portrait; when he sees it, he draws her too. Through the rest of the play they pine in love for each other. Malati calls her father greedy for going along with the king's plan to marry her to Nandana, since a father deferring to a king in this is not sanctioned by morality nor by custom. Madhava notes that success comes from education with innate understanding, boldness combined with practiced eloquence, and tact with quick wit. Malati's friend Madayantika is attacked by a tiger, and Madhava's friend Makaranda is wounded saving her life. In their amorous desperation Madhava sells his flesh to the gods, and he saves the suicidal Malati from being sacrificed by killing Aghoraghanta, whose pupil Kapalakundala then causes him much suffering. Finally Madhava and Malati are able to marry, as Makaranda marries Madayantika. These plays make clear that courtly love and romance were thriving in India for centuries before they were rediscovered in Europe.
The Rashtrakuta Dantidurga married a Chalukya princess and became a vassal king about 733; he and Gujarat's Pulakeshin helped Chalukya emperor Vikramaditya II repulse an Arab invasion, and Dantidurga's army joined the emperor in a victorious expedition against Kanchi and the Pallavas. After Vikramaditya II died in 747, Dantidurga conquered Gurjara, Malwa, and Madhya Pradesh. This Rashtrakuta king then confronted and defeated Chalukya emperor Kirtivarman II so that by the end of 753 he controlled all of Maharashtra. The next Rashtrakuta ruler Krishna I completed the demise of the Chalukya empire and was succeeded about 773 by his eldest son Govinda II. Absorbed in personal pleasures, he left the administration to his brother Dhruva, who eventually revolted and usurped the throne, defeating the Ganga, Pallava, and Vengi kings who had opposed him.
The Pratihara ruler of Gurjara, Vatsaraja, took over Kanauj and installed Indrayudha as governor there. The Palas rose to power by unifying Bengal under the elected king Gopala about 750. He patronized Buddhism, and his successor Dharmapala had fifty monasteries built, founding the Vikramashila monastery with 108 monks in charge of various programs. During the reign of Dharmapala the Jain scholar Haribhadra recommended respecting various views because of Jainism's principles of nonviolence and many-sidedness. Haribhadra found that the following eight qualities can be applied to the faithful of any tradition: nonviolence, truth, honesty, chastity, detachment, reverence for a teacher, fasting, and knowledge. Dharmapala marched into the Doab to challenge the Pratiharas but was defeated by Vatsaraja. When these two adversaries were about to meet for a second battle in the Doab, the Rashtrakuta ruler Dhruva from the Deccan defeated Vatsaraja first and then Dharmapala but did not occupy Kanauj.
Dhruva returned to the south with booty and was succeeded by his third son Govinda III in 793. Govinda had to defeat his brother Stambha and a rebellion of twelve kings, but the two brothers reconciled and turned on Ganga prince Shivamira, whom they returned to prison. Supreme over the Deccan, Govinda III left his brother Indra as viceroy of Gujarat and Malava and marched his army north toward Kanauj, which Vatsaraja's successor Nagabhata II had occupied while Dharmapala's nominee Chakrayudha was on that throne. Govinda's army defeated Nagabhata's; Chakrayudha surrendered, and Dharmapala submitted. Govinda III marched all the way to the Himalayas, uprooting and reinstating local kings.
Rashtrakuta supremacy was challenged by Vijayaditya II, who had become king of Vengi in 799; but Govinda defeated him and installed his brother Bhima-Salukki on the Vengi throne about 802. Then Govinda's forces scattered a confederacy of Pallava, Pandya, Kerala, and Ganga rulers and occupied Kanchi, threatening the king of Sri Lanka, who sent him two statues. After Govinda III died in 814, Chalukya Vijayaditya II overthrew Bhima-Salukki to regain his Vengi throne; then his army invaded Rashtrakuta territory, plundering and devastating the city of Stambha. Vijayaditya ruled for nearly half a century and was said to have fought 108 battles in a 12-year war with the Rashtrakutas and the Gangas. His grandson Vijayaditya III ruled Vengi for 44 years (848-92); he also invaded the Rashtrakuta empire in the north, burning Achalapura, and it was reported he took gold by force from the Ganga king of Kalinga. His successor Chalukya-Bhima I was king of Vengi for 30 years and was said to have turned his attention to helping ascetics and those in distress. Struggles with his neighbors continued though, and Chalukya-Bhima was even captured for a time.
Dharmapala's son Devapala also supported Buddhism and extended the Pala empire in the first half of the 9th century by defeating the Utkalas, Assam, Huns, Dravidas, and Gurjaras, while maintaining his domain against three generations of Pratihara rulers. His successor Vigrahapala retired to an ascetic life after ruling only three years, and his son Narayanapala was also of a peaceful and religious disposition, allowing the Pala empire to languish. After the Pala empire was defeated by the Rashtrakutas and Pratiharas, subordinate chiefs became independent; Assam king Harjara even claimed an imperial title. Just before his long reign ended in 908 Narayanapala did reclaim some territories after the Rashtrakuta invasion of the Pratihara dominions; but in the 10th century during the reign of the next three kings the Pala kingdom declined as principalities asserted their independence in conflicts with each other.
Chandella king Yashovarman invaded the Palas and the Kambojas, and he claimed to have conquered Gauda and Mithila. His successor Dhanga ruled through the second half of the 10th century and was the first independent Chandella king, calling himself the lord of Kalanjara. In the late 8th century Arab military expeditions had attempted to make Kabul pay tribute to the Muslim caliph. In 870 Kabul and Zabul were conquered by Ya'qub ibn Layth; the king of Zabulistan was killed, and the people accepted Islam. Ghazni sultan Sabutkin (r. 977-97) invaded India with a Muslim army and defeated Dhanga and a confederacy of Hindu chiefs about 989.
South of the Chandellas the Kalachuris led by Kokkalla in the second half of the 9th century battled the Pratiharas under Bhoja, Turushkas (Muslims), Vanga in east Bengal, Rashtrakuta king Krishna II, and Konkan. His successor Shankaragana fought Kosala, but he and Krishna II had to retreat from the Eastern Chalukyas. In the next century Kalachuri king Yuvaraja I celebrated his victory over Vallabha with a performance of Rajshekhara's drama Viddhashalabhanjika. Yuvaraja's son Lakshmanaraja raided east Bengal, defeated Kosala, and invaded the west. Like his father, he patronized Shaivite teachers and monasteries. Near the end of the 10th century Kalachuri king Yuvaraja II suffered attacks from Chalukya ruler Taila II and Paramara king Munja. After many conquests, the aggressive Munja, disregarding the advice of his counselor Rudraditya, was defeated and captured by Taila and executed after an attempted rescue.
In 814 Govinda III was succeeded as Rashtrakuta ruler by his son Amoghavarsha, only about 13 years old; Gujarat viceroy Karkka acted as regent. Three years later a revolt led by Vijayaditya II, who had regained the Vengi throne, temporarily overthrew Rashtrakuta power until Karkka reinstated Amoghavarsha I by 821. A decade later the Rashtrakuta army defeated Vijayaditya II and occupied Vengi for about a dozen years. Karkka was made viceroy in Gujarat, but his son Dhruva I rebelled and was killed about 845. The Rashtrakutas also fought the Gangas for about twenty years until Amoghavarsha's daughter married a Ganga prince about 860. In addition to his military activities Amoghavarsha sponsored several famous Hindu and Jain writers and wrote a book himself on Jain ethics. Jain kings and soldiers made an exception to the prohibition against killing for the duties of hanging murderers and slaying enemies in battle. He died in 878 and was succeeded by his son Krishna II, who married the daughter of Chedi ruler Kokkalla I to gain an ally for his many wars with the Pratiharas, Eastern Chalukyas, Vengi, and the Cholas.
Krishna II died in 914 and was succeeded by his grandson Indra III, who marched his army north and captured northern India's imperial city Kanauj. However, Chandella king Harsha helped the Pratihara Mahipala regain his throne at Kanauj. Indra III died in 922; but his religious son Amoghavarsha II had to get help from his Chedi relations to defeat his brother Govinda IV, who had usurped the throne for fourteen years. Three years later in 939 Krishna III succeeded as Rashtrakuta emperor and organized an invasion of Chola and twenty years later another expedition to the north. The Rashtrakutas reigned over a vast empire when he died in 967; but with no living issue the struggle for the throne, despite the efforts of Ganga king Marasimha III, resulted in the triumph of Chalukya king Taila II in 974. That year Marasimha starved himself to death in the Jain manner and was succeeded by Rajamalla IV, whose minister Chamunda Raya staved off usurpation. His Chamunda Raya Purana includes an account of the 24 Jain prophets.
In the north in the middle of the 9th century the Pratiharas were attacked by Pala emperor Devapala; but Pratihara king Bhoja and his allies defeated Pala king Narayanapala. Bhoja won and lost battles against Rashtrakuta king Krishna II. The Pratiharas were described in 851 by an Arab as having the finest cavalry and as the greatest foe of the Muslims, though no country in India was safer from robbers. Bhoja ruled nearly a half century, and his successor Mahendrapala I expanded the Pratihara empire to the east. When Mahipala was ruling in 915 Al Mas'udi from Baghdad observed that the Pratiharas were at war with the Muslims in the west and the Rashtrakutas in the south, and he claimed they had four armies of about 800,000 men each. When Indra III sacked Kanauj, Mahipala fled but returned after the Rashtrakutas left. In the mid-10th century the Pratiharas had several kings, as the empire disintegrated and was reduced to territory around Kanauj.
A history of Kashmir's kings called the Rajatarangini was written by Kalhana in the 12th century. Vajraditya became king of Kashmir about 762 and was accused of selling men to the Mlechchhas (probably Arabs). Jayapida ruled Kashmir during the last thirty years of the 8th century, fighting wars of conquest even though his army once deserted his camp and people complained of high taxes. Family intrigue and factional violence led to a series of puppet kings until Avanti-varman began the Utpala dynasty of Kashmir in 855. His minister Suvya's engineering projects greatly increased the grain yield and lowered its prices. Avanti-varman's death in 883 was followed by a civil war won by Shankara-varman, who then invaded Darvabhisara, Gurjara, and Udabhanda; but he was killed by people in Urasha, who resented his army being quartered there. More family intrigues, bribery, and struggles for power between the Tantrin infantry, Ekanga military police, and the Damara feudal landowners caused a series of short reigns until the minister Kamalavardhana took control and asked the assembly to appoint a king; they chose the Brahmin Yashakara in 939.
Yashakara was persuaded to resign by his minister Parvagupta, who killed the new Kashmir king but died two years later in 950. Parvagupta's son Kshemagupta became king and married the Lohara princess Didda. Eight years later she became regent for their son Abhimanyu and won over the rebel Yashodhara by appointing him commander of her army. When King Abhimanyu died in 972, his three sons ruled in succession until each in turn was murdered by their grandmother, Queen Didda; she ruled Kashmir herself with the help of an unpopular prime minister from 980 until she died in 1003.
In the south the Pandyas had risen to power in the late 8th century under King Nedunjadaiyan. He ruled for fifty years, and his son Srimara Srivallabha reigned nearly as long, winning victories over the Gangas, Pallavas, Cholas, Kalingas, Magadhas, and others until he was defeated by Pallava Nandi-varman III at Tellaru. The Pandya empire was ruined when his successor Varaguna II was badly beaten about 880 by a combined force of Pallavas, western Gangas, and Cholas. The Chola dynasty of Tanjore was founded by Vijayalaya in the middle of the 9th century. As a vassal of the Pallavas, he and his son Aditya I helped their sovereign defeat the Pandyas. Aditya ruled 36 years and was succeeded as Chola king by his son Parantaka I (r. 907-953). His military campaigns established the Chola empire with the help of his allies, the Gangas, Kerala, and the Kodumbalur chiefs. The Pandyas and the Sinhalese king of Sri Lanka were defeated by the Cholas about 915. Parantaka demolished remaining Pallava power, but in 949 the Cholas were decisively beaten by Rashtrakuta king Krishna III at Takkolam, resulting in the loss of Tondamandalam and the Pandya country. Chola power was firmly established during the reign (985-1014) of Rajaraja I, who attacked the Kerala, Sri Lanka, and the Pandyas to break up their control of the western trade.
When the Pandyas invaded the island, Sri Lanka king Sena I (r. 833-53) fled as the royal treasury was plundered. His successor Sena II (r. 853-87) sent a Sinhalese army in retaliation, besieging Madura, defeating the Pandyas, and killing their king. The Pandya capital was plundered, and the golden images were taken back to the island. In 915 a Sinhalese army from Sri Lanka supported Pandyan ruler Rajasimha II against the Cholas; but the Chola army invaded Sri Lanka and apparently stayed until the Rashtrakutas invaded their country in 949. Sri Lanka king Mahinda IV (r. 956-72) had some of the monasteries burnt by the Cholas restored. Sena V (r. 972-82) became king at the age of twelve but died of alcoholism. During his reign a rebellion supported by Damila forces ravaged the island. By the time of Mahinda V (r. 982-1029) the monasteries owned extensive land, and barons kept the taxes from their lands. As unpaid mercenaries revolted and pillaged, Mahinda fled to Rohana. Chola king Rajaraja sent a force that sacked Anuradhapura, ending its period as the capital in 993 as the northern plains became a Chola province. In 1017 the Cholas conquered the south as well and took Mahinda to India as a prisoner for the rest of his life.
In India during this period Hindu colleges (ghatikas) were associated with the temples, and gradually the social power of the Brahmins superseded Buddhists and Jains, though the latter survived in the west. Jain gurus, owning nothing and wanting nothing, were often able to persuade the wealthy to contribute the four gifts of education, food, medicine, and shelter. In the devotional worship of Vishnu and Shiva and their avatars (incarnations), the Buddha became just another avatar for Hindus. Amid the increasing wars and militarism the ethical value of ahimsa (non-injury) so important to the Jains and Buddhists receded. The examples of the destroyer Shiva or Vishnu's incarnations as Rama and Krishna hardly promoted nonviolence. Village assemblies tended to have more autonomy in south India. The ur was open to all adult males in the village, but the sabha was chosen by lot from those qualified by land ownership, aged 35-70, knowing mantras and Brahmanas, and free of any major crime or sin. Land was worked by tenant peasants, who usually had to pay from one-sixth to one-third of their produce. Vegetarian diet was customary, and meat was expensive.
Women did not have political rights and usually worked in the home or in the fields, though upper caste women and courtesans could defy social conventions. Women attendants in the temples could become dancers, but some were exploited as prostitutes by temple authorities. Temple sculptures as well as literature were often quite erotic, as the loves of Krishna and the prowess of the Shiva lingam were celebrated, and the puritanical ethics of Buddhism and Jainism became less influential.
Feminine creative energy was worshiped as shakti, and Tantra in Hinduism and Tibetan Buddhism celebrated the union of the sexual act as a symbol of divine union; their rituals might culminate in partaking of the five Ms - madya (wine), matsya (fish), mamsa (flesh), mudra (grain), and maithuna (coitus). Although in the early stages of spiritual development Tantra taught the usual moral avoidance of cruelty, alcohol, and sexual intercourse, in the fifth stage after training by the guru secret rites at night might defy such social taboos. Ultimately the aspirant is not afraid to practice openly what others disapprove in pursuing what he thinks is true, transcending the likes and dislikes of earthly life like God, to whom all things are equal. However, some argued that the highest stage, symbolized as the external worship of flowers, negates ignorance, ego, attachment, vanity, delusion, pride, calumniation, perturbation, jealousy, and greed, culminating in the five virtues of nonviolence (ahimsa), control of the senses, charity, forgiveness, and knowledge.
The worker caste of Sudras was divided into the clean and the untouchables, who were barred from the temples. There were a few domestic slaves and those sold to the temples. Brahmins were often given tax-free grants of land, and they were forbidden by caste laws to work in cultivation; thus the peasant Sudras provided the labor. The increasing power of the Brahmin landowners led to a decline of merchants and the Buddhists they often had supported.
Commentaries on the Laws of Manu by Medhatithi focused on such issues as the duty of the king to protect the people, their rights, and property. Although following the tradition that the king should take up cases in order of caste, Medhatithi believed that a lower caste suit should be taken up first if it is more urgent. Not only should a Brahmin be exempt from the death penalty and corporal punishment, he thought that for a first offense not even a fine should be imposed on a Brahmin. Medhatithi also held that in education the rod should only be used mildly and as a last resort; his attitude about a husband beating his wife was similar. Medhatithi believed that a woman's mind was not under her control and that all women should be guarded by their male relations. He upheld the property rights of widows who had been faithful but believed the unfaithful should be cast out to a separate life. Widow suicide called sati was approved by some and criticized by others. During this period marriages were often arranged for girls before they reached the age of puberty, though self-choice still was practiced.
The Jain monk Somadeva in his Nitivakyamrita also wrote that the king must chastise the wicked and that kings being divine should be obeyed as a spiritual duty. However, if the king does not speak the truth, he is worthless; for when the king is deceitful and unjust, who will not be? If he does not recognize merit, the cultured will not come to his court. Bribery is the door by which many sins enter, and the king should never speak what is hurtful, untrustworthy, untrue, or unnecessary. The force of arms cannot accomplish what peace does. If you can gain your goal with sugar, why use poison? In 959 Somadeva wrote the romance Yashastilaka in Sanskrit prose and verse, emphasizing devotion to the god Jina, goodwill to all creatures, hospitality to everyone, and altruism while defending the unpopular practices of the Digambara ascetics such as nudity, abstaining from bathing, and eating standing up.
The indigenous Bon religion of Tibet was animistic and included the doctrine of reincarnation. Tradition called Namri Songtsen the 32nd king of Tibet. His 13-year-old son Songtsen Gampo became king in 630. He sent seventeen scholars to India to learn the Sanskrit language. The Tibetans conquered Burma and in 640 occupied Nepal. Songtsen Gampo married a princess from Nepal and also wanted to marry a Chinese princess, but so did Eastern Tartar (Tuyuhun) ruler Thokiki. According to ancient records, the Tibetans recruited an army of 200,000, defeated the Tartars, and captured the city of Songzhou, persuading the Chinese emperor to send his daughter to Lhasa in 641. Songtsen Gampo's marriage to Buddhist princesses led to his conversion, the building of temples and 900 monasteries, and the translation of Buddhist texts. His people were instructed how to write the Tibetan dialect with adapted Sanskrit letters. Songtsen Gampo died in 649, but the Chinese princess lived on until 680. He was succeeded by his young grandson Mangsong Mangtsen, and Gar Tongtsen governed as regent and conducted military campaigns in Asha for eight years. Gar Tongtsen returned to Lhasa in 666 and died the next year of a fever. A large military fortress was built at Dremakhol in 668, and the Eastern Tartars swore loyalty.
During a royal power struggle involving the powerful Gar ministers, Tibet's peace with China was broken in 670, and for two centuries their frontier was in a state of war. The Tibetans invaded the Tarim basin and seized four garrisons in Chinese Turkestan. They raided the Shanzhou province in 676, the year Mangsong Mangtsen died. His death was kept a secret from the Chinese for three years, and a revolt in Shangshong was suppressed by the Tibetan military in 677. Dusong Mangje was born a few days after his royal father died. The Gar brothers led their armies against the Chinese. During a power struggle Gar Zindoye was captured in battle in 694; his brother Tsenyen Sungton was executed for treason the next year; and Triding Tsendro was disgraced and committed suicide in 699, when Dusong defeated the Gar army. Nepal and northern India revolted in 702, and two years later the Tibetan king was killed in battle. Tibetan sources reported he died in Nanzhao, but according to the Chinese he was killed while suppressing the revolt in Nepal.
Since Mes-Agtshom (also known as Tride Tsugtsen or Khri-Ide-btsug-brtan) was only seven years old, his grandmother Trimalo acted as regent. Mes-Agtshom also married a Chinese princess to improve relations; but by 719 the Tibetans were trading with the Arabs and fighting together against the Chinese. In 730 Tibet made peace with China and requested classics and histories, which the Emperor sent to Tibet despite a minister's warning they contained defense strategies. During a plague in 740-41 all the foreign monks were expelled from Tibet. After the imperial princess died in 741, a large Tibetan army invaded China. Nanzhao, suffering from Chinese armies, formed an alliance with Tibet in 750. Mes-Agtshom died in 755, according to Tibetan sources by a horse accident; but an inscription from the following reign accused two ministers of assassinating him. During Trisong Detsen's reign (755-97) Tibetans collected tribute from the Pala king of Bengal and ruled Nanzhao. In 763 a large Tibetan army invaded China and even occupied their capital at Chang'an. The Chinese emperor promised to send Tibet 50,000 rolls of silk each year; but when the tribute was not paid, the war continued. In 778 Siamese troops fought with the Tibetans against the Chinese in Sichuan (Szech'uan). Peace was made in 783 when China ceded much territory to Tibet. In 790 the Tibetans regained four garrisons in Anxi they had lost to Chinese forces a century before.
After Mashang, the minister who favored the Bon religion, was removed from the scene, Trisong Detsen sent minister Ba Salnang to invite the Indian pandit Shantirakshita to come from the university at Nalanda. The people believed that Bon spirits caused bad omens, and Shantirakshita returned to Nepal. So Ba Salnang invited Indian Tantric master Padmasambhava, who was able to overcome the Bon spirits by making them take an oath to defend the Buddhist religion. Shantirakshita returned and supervised the building of a monastery that came to be known as Samye. He was named high priest of Tibet, and he introduced the "ten virtues." When Padmasambhava was unable to refute the instantaneous enlightenment doctrine of the Chinese monk Hoshang, Kamalashila was invited from India for a debate at Samye that lasted from 792 until 794. Kamalashila argued that enlightenment is a gradual process resulting from study, analysis, and good deeds. Kamalashila was declared the winner, and King Trisong Detsen declared Buddhism the official religion of Tibet.
Padmasambhava founded the red-hat Adi-yoga school and translated many Sanskrit books into Tibetan. A mythic account of his supernatural life that lasted twelve centuries was written by the Tibetan lady Yeshe Tsogyel. As his name implies, Padmasambhava was said to have been born miraculously on a lotus. His extraordinary and unconventional experiences included being married to 500 wives before renouncing a kingdom, several cases of cannibalism, surviving being burned at the stake, killing butchers, attaining Buddhahood, and teaching spirits and humans in many countries. In the guise of different famous teachers he taught people how to overcome the five poisons of sloth, anger, lust, arrogance, and jealousy.
The Tibetan Book of the Dead was first committed to writing around this time. Its title Bardo Thodol more literally means "liberation by hearing on the after-death plane." Similar in many ways to the Egyptian Book of the Dead, it likely contains many pre-Buddhist elements, as it was compiled over the centuries. The first part, chikhai bardo, describes the psychic experiences at the moment of death and urges one to unite with the all-good pure reality of the clear light. In the second stage of the chonyid bardo karmic illusions are experienced in a dream-like state, the thought-forms of one's own intellect. In the sidpa bardo, the third and last phase, one experiences the judgment of one's own karma; prayer is recommended, but instincts tend to lead one back into rebirth in another body. The purpose of the book is to teach one how to attain liberation in the earlier stages and so prevent reincarnation.
Muni Tsenpo ruled Tibet from 797 probably to 804, although some believed he ruled for only eighteen months. He tried to reduce the disparity between the rich and poor by introducing land reform; but when the rich got richer, he tried two other reform plans. Padmasambhava advised him, "Our condition in this life is entirely dependent upon the actions of our previous life, and nothing can be done to alter the scheme of things."9 Muni Tsenpo had married his father's young wife to protect her from his mother's jealousy; but she turned against her son, the new king, and poisoned him; some believed he was poisoned because of his reforms. Since Muni Tsenpo had no sons, he was succeeded by his youngest brother Sadnaleg; his other brother Mutik Tsenpo was disqualified for having killed a minister in anger. During Sadnaleg's reign the Tibetans attacked the Arabs in the west, invading Transoxiana and besieging Samarqand; but they made an agreement with Caliph al-Ma'mun.
When Sadnaleg died in 815, his ministers chose his Buddhist son Ralpachen as king over his irreligious older brother Darma. After a border dispute, Buddhists mediated a treaty between Tibet and China in 821 that reaffirmed the boundaries of the 783 treaty. Ralpachen decreed that seven households should provide for each monk. By intrigues Darma managed to get his brother Tsangma and the trusted Buddhist minister Bande Dangka sent into exile; then Be Gyaltore and Chogro Lhalon, ministers who were loyal to Darma, went and murdered Bande Dangka. In 836 these same two pro-Bon ministers assassinated King Ralpachen and put Darma on the throne. They promulgated laws to destroy Buddhism in Tibet and closed the temples. Buddhist monks had to choose between marrying, carrying arms as hunters, becoming followers of the Bon religion, or death. In 842 the monk Lhalung Palgye Dorje assassinated King Darma with an arrow and escaped. That year marked a division in the royal line and the beginning of local rule in Tibet that lasted more than two centuries. Central Tibet suffered most from Darma's persecution, but Buddhism was kept alive in eastern and western Tibet. Buddhists helped Darma's son (r. 842-70) gain the throne, and he promoted their religion. As their empire disintegrated into separate warring territories, Tibetan occupation in Turkestan was ended by Turks, Uighurs, and Qarluqs.
In 978 translators Rinchen Zangpo and Lakpe Sherab invited some Indian pandits to come to Tibet, and this marked the beginning of the Buddhist renaissance in Tibet. Atisha (982-1054) was persuaded to come from India in 1042 and reformed the Tantric practices by introducing celibacy and a higher morality among the priests. He wrote The Lamp that Shows the Path to Enlightenment and founded the Katampa order, which was distinguished from the old Nyingmapa order of Padmasambhava. Drogmi (992-1074) taught the use of sexual practices for mystical realization, and his scholarly disciple Khon Konchog Gyalpo founded the Sakya monastery in 1073.
The Kagyupa school traces its lineage from the celestial Buddha Dorje-Chang to Tilopa (988-1069), who taught Naropa (1016-1100) in India. From a royal family in Bengal, Naropa studied in Kashmir for three years until he was fourteen. Three years later his family made him marry a Brahmin woman; they were divorced after eight years, though she became a writer too. In 1049 Naropa won a debate at Nalanda and was elected abbot there for eight years. He left to find the guru he had seen in a vision and was on the verge of suicide when Tilopa asked him how he would find his guru if he killed the Buddha. Naropa served Tilopa for twelve years during which he meditated in silence most of the time. However, twelve times he followed his guru's irrational suggestions and caused himself suffering. Each time Tilopa pointed out the lesson and healed him, according to the biography written about a century later. The twelve lessons taught him about the ordinary wish-fulfilling gem, one-valueness, commitment, mystic heat, apparition, dream, radiant light, transference, resurrection, eternal delight (learned from Tantric sex), mahamudra (authenticity), and the intermediate state (between birth and death). Naropa then went to Tibet where he taught Marpa (1012-96), who brought songs from the Tantric poets of Bengal to his disciple Milarepa.
Milarepa was born on the Tibetan frontier of Nepal in 1040. When Milarepa was seven years old, his father died; his aunt and uncle took control of the estate, and he and his mother had to work as field laborers in poor conditions. When he came of age, he, his mother, and his sister were thrown out of their house. So Milarepa studied black magic, and his mother threatened to kill herself if he failed. Milarepa caused the house to fall down, killing 35 people. Next his teacher taught him how to cause a hail storm, and at his mother's request he destroyed some crops. Milarepa repented of this sorcery and prayed to take up a religious life. He found his way to the lama Marpa the translator, who said that even if he imparted the truth to him, his liberation in one lifetime would depend on his own perseverance and energy. The lama was reluctant to give the truth to one who had done such evil deeds. So he had Milarepa build walls and often tear them down, while his wife pleaded for the young aspirant. Frustrated, Milarepa went to another teacher, who asked him to destroy his enemies with a hail storm, which he did while preserving an old woman's plot.
Milarepa returned to his guru Marpa and was initiated. Then he meditated in a cave for eleven months, discovering that the highest path started with a compassionate mood dedicating one's efforts to universal good, followed by clear aspiration transcending thought with prayer for others. After many years Milarepa went back to his old village to discover that his mother had died, his sister was gone, and his house and fields were in ruins. Describing his life in songs, Milarepa decided, "So I will go to gain the truth divine, to the Dragkar-taso cave I'll go, to practice meditation."10 He met the woman to whom he was betrothed in childhood, but he decided on the path of total self-abnegation. Going out to beg for food he met his aunt, who loosed dogs on him; but after talking he let her live in his house and cultivate his field. Milarepa practiced patience on those who had wronged him, calling it the shortest path to Buddhahood. Giving up comfort, material things, and desires for name or fame, he meditated and lived on nettles and water. He preached on the law of karma, and eventually his aunt was converted and devoted herself to penance and meditation. His sister found his nakedness shameful, but Milarepa declared that deception and evil deeds are shameful, not the body. For those who believe in karma, he taught, thoughts of the misery in the lower worlds may inspire the quest for Buddhahood.
It was said that Milarepa had 25 saints among his disciples, including his sister and three other women. In one of his last songs he wrote, "If pain and sorrow you desire sincerely to avoid, avoid, then, doing harm to others."11 Many miraculous stories are told of his passing from his body and the funeral; Milarepa died in 1123, and it was claimed that for a time no wars or epidemics ravaged the Earth. The biography of his life and songs was written by his disciple Rechung.
The life of Nangsa Obum, a contemporary of Milarepa, was also told in songs and prose. She was born in Tibet, and because of her beauty and virtue she was married to Dragpa Samdrub, son of Rinang king Dragchen. She bore a son but longed to practice the dharma. Nangsa was falsely accused by Dragchen's jealous sister Ani Nyemo of giving seven sacks of flour to Rechung and other lamas. Beaten by her husband and separated from her child by the king, Nangsa died of a broken heart. Since her good deeds so outnumbered her bad deeds, the Lord of Death allowed her to come back to life. She decided to go practice the dharma; but her son and a repentant Ani Nyemo pleaded for her to stay. She remained but then visited her parents' home, where she took up weaving.
After quarreling with her mother, Nangsa left and went to study the sutras and practice Tantra. The king and her husband attacked her teacher Sakya Gyaltsen, who healed all the wounded monks. Then the teacher excoriated them for having animal minds and black karma, noting that Nangsa had come there for something better than a Rinang king; her good qualities would be wasted living with a hunter; they were trying to make a snow lion into a dog. The noblemen admitted they had made their karma worse and asked to be taught. Sakya replied that for those who have done wrong repentance is like the sun rising. They should think about their suffering and the meaninglessness of their lives and how much better they will be in the field of dharma. Dragchen and his father retired from worldly life, and Nangsa's 15-year-old son was given the kingdom.
Machig Lapdron (1055-1145) was said to be a reincarnation of Padmasambhava's consort Yeshe Tsogyel and of an Indian yogi named Monlam Drub. Leaving that body in a cave in India the soul traveled to Tibet and was born as Machig. As a child, she learned to recite the sutras at record speed, and at initiation she asked how she could help all sentient beings. In a dream an Indian teacher told her to confess her hidden faults, approach what she found repulsive, help those whom she thinks cannot be helped, let go of any attachment, go to scary places like cemeteries, be aware, and find the Buddha within. A lama taught her to examine the movement of her own mind carefully and become free of petty dualism and the demon of self-cherishing. She learned to wander and stay anywhere, and she absorbed various teachings from numerous gurus. She married and had three children but soon retired from the world. By forty she was well known in Tibet, and numerous monks and nuns came from India to challenge her; but she defeated them in debate. It was said that 433 lepers were cured by practicing her teachings.
A book on the supreme path of discipleship was compiled by Milarepa's disciple Lharje (1077-1152), who founded the Cur-lka monastery in 1150. This book lists yogic precepts in various categories. Causes of regret include frittering life away, dying an irreligious and worldly person, and selling the wise doctrine as merchandise. Requirements include sure action, diligence, knowledge of one's own faults and virtues, keen intellect and faith, watchfulness, freedom from desire and attachment, and love and compassion in thought and deed directed to the service of all sentient beings. "Unless the mind be disciplined to selflessness and infinite compassion, one is apt to fall into the error of seeking liberation for self alone."12 Offering to deities meat obtained by killing is like offering a mother the flesh of her own child. The virtue of the holy dharma is shown when those whose heavy evil karma would have condemned them to suffering turn instead to a religious life.
The black-hat Karmapa order was founded in 1147 by Tusum Khyenpa (1110-93), a native of Kham who studied with Milarepa's disciples. This sect claims to have started the system of leadership by successive reincarnations of the same soul, later adopted by the Dalai and Panchen Lamas. In 1207 a Tibetan council decided to submit peacefully to Genghis Khan and pay tribute. After the death of Genghis Khan in 1227, the Tibetans stopped paying the tribute, and the Mongols invaded in 1240, burning the Rating and Gyal Lhakhang monasteries and killing five hundred monks and civilians. In 1244 Sakya Pandita (1182-1251) went to Mongolia, where he initiated Genghis Khan's grandson Godan. Sakya Pandita instructed him in the Buddha's teachings and persuaded him to stop drowning the Chinese to reduce their population. Sakya Pandita was given authority over the thirteen myriarchies of central Tibet and told the Tibetan leaders it was useless to resist the Mongols' military power. He is also credited with devising a Mongolian alphabet. After Sakya Pandita died, the Mongols invaded Tibet in 1252. After Godan died, Kublai in 1254 invested Phagpa as the supreme ruler in Tibet by giving him a letter that recommended the monks stop quarreling and live peaceably. Phagpa conducted the enthronement of Kublai Khan in 1260. Phagpa returned to Sakya in 1276 and died four years later.
In 1282 Dharmapala was appointed imperial preceptor (tishri) in Beijing. The Sakya administrator Shang Tsun objected to Kublai Khan's plans to invade India and Nepal, and the yogi Ugyen Sengge wrote a long poem against the idea, which Kublai Khan abandoned. After Tishri Dharmapala died in 1287, the myriarchy Drikhung attacked Sakya; but administrator Ag-len used troops and Mongol cavalry to defeat them, marching into Drikhung territory and burning their temple in 1290. Kublai Khan had been a patron of Buddhism in Tibet, but he died in 1294. After his death the influence of the Mongols in Tibet diminished.
Between 1000 and 1027 Ghazni ruler Mahmud invaded India with an army at least twelve times. About 15,000 Muslims took Peshawar and killed 5,000 Hindus in battle. Shahi king Jayapala was so ashamed of being defeated three times that he burned himself to death on a funeral pyre. In 1004 Mahmud's forces crossed the Indus River, then attacked and pillaged the wealth of Bhatiya. On the way to attack the heretical Abu-'l-Fath Daud, Mahmud defeated Shahi king Anandapala. Daud was forced to pay 20,000,000 dirhams and was allowed to rule as a Muslim if he paid 20,000 golden dirhams annually. Mahmud's army again met Anandapala's the next year; after 5,000 Muslims lost their lives, 20,000 Hindu soldiers were killed. Mahmud captured an immense treasure of 70,000,000 dirhams, plus gold and silver ingots, jewels, and other precious goods. After Mahmud defeated the king of Narayan and the rebelling Daud, Anandapala made a treaty that lasted until his death, allowing the Muslims passage to attack the sacred city of Thaneswar. In 1013 Mahmud attacked and defeated Anandapala's successor Trilochanapala, annexing the western and central portions of the Shahi kingdom in the Punjab. Next the Muslims plundered the Kashmir valley, though Mahmud was never able to hold it.
To attack Kanauj in the heart of India, Mahmud raised a force of 100,000 cavalry and 20,000 infantry. Most Hindu chiefs submitted, but in Mahaban nearly 5,000 were killed, causing Kulachand to kill himself. Next the Muslims plundered the sacred city of Mathura, destroying a temple that had taken two centuries to build and was estimated to be worth 100,000,000 red dinars. After conquering more forts and obtaining more booty, Mahmud ordered the inhabitants slain by sword, the city plundered, and the idols destroyed in Kanauj, which was said to contain almost 10,000 temples. In 1019 Mahmud returned to Ghazni with immense wealth and 53,000 prisoners to be sold as slaves.
When Mahmud's army returned again to chastise Chandella ruler Vidyadhara for killing the submitting Pratihara king Rajyapala, the resistance of Trilochanapala was overcome, making all of Shahi part of Mahmud's empire. Although he had 45,000 infantry, 36,000 cavalry, and 640 elephants, Vidyadhara fled after a minor defeat. The next year Mahmud and Vidyadhara agreed to a peace. 50,000 Hindus were killed in 1025 defending the Shaivite temple of Somanatha in Kathiawar, as Mahmud captured another 20,000,000 dirhams. In his last campaign Mahmud used a navy of 1400 boats with iron spikes to defeat the Jats with their 4,000 boats in the Indus. Mahmud's soldiers often gave people the choice of accepting Islam or death. These threats and the enslavement of Hindus by Muslims and the Hindus' consequent attitude of considering Muslims impure barbarians (mlechchha) caused a great division between these religious groups.
During this time Mahipala I ruled Bengal for nearly half a century and founded a second Pala empire. In the half century around 1100 Ramapala tried to restore the decreasing realm of the Palas by invading his neighbors until he drowned himself in grief in the Ganges. Buddhists were persecuted in Varendri by the Vangala army. In the 12th century Vijayasena established a powerful kingdom in Bengal; but in spite of the military victories of Lakshmanasena, who began ruling in 1178, lands were lost to the Muslims and others early in the 13th century.
Military campaigns led by the Paramara Bhoja and the Kalachuri Karna against Muslims in the Punjab discouraged Muslim invasions after Punjab governor Ahmad Niyaltigin exacted tribute from the Thakurs and plundered the city of Banaras in 1034. Bhoja and a Hindu confederacy of chiefs conquered Hansi, Thaneswar, Nagarkot, and other territories from the Muslims in 1043. Bhoja also wrote 23 books, patronized writers, and established schools for his subjects. Karna won many battles over various kingdoms in India but gained little material advantage. About 1090 Gahadavala ruler Chandradeva seems to have collaborated with the Muslim governor of the Punjab to seize Kanauj from Rashtrakuta ruler Gopala. In the first half of the 12th century Gahadavala ruler Govindachandra came into conflict with the Palas, Senas, Gangas, Kakatiyas, Chalukyas, Chandellas, Chaulukyas, the Karnatakas of Mithila, and the Muslims.
The Ghuzz Turks made Muhammad Ghuri governor of Ghazni in 1173; he attacked the Gujarat kingdom in 1178, but his Turkish army was defeated by the Chaulukya king Mularaja II. Chahamana Prithviraja III began ruling that year and four years later defeated and plundered Paramardi's Chandella kingdom. In 1186 Khusrav Malik, the last Yamini ruler of Ghazni, was captured at Lahore by Muhammad Ghuri. The next year the Chahamana king Prithviraja made a treaty with Bhima II of Gujarat. Prithviraja's forces defeated Muhammad Ghuri's army at Tarain and regained Chahamana supremacy over the Punjab. Muhammad Ghuri organized 120,000 men from Ghazni to face 300,000 led by Prithviraja, who was captured and eventually executed as the Muslims demolished the temples of Ajmer in 1192 and built mosques. From there Sultan Muhammad Ghuri marched to Delhi, where he appointed general Qutb-ud-din Aybak governor; then with 50,000 cavalry Muhammad Ghuri defeated the Gahadavala army of Jayachandra before leaving for Ghazni. Prithviraja's brother Hariraja recaptured Delhi and Ajmer; but after losing them again to Aybak, he burned himself to death in 1194.
Next the local Mher tribes and the Chaulukya king of Gujarat, Bhima II, expelled the Turks from Rajputana; but in 1197 Aybak invaded Gujarat with more troops from Ghazni, killing 50,000 and capturing 20,000. In 1202 Aybak besieged Chandella king Paramardi at Kalanjara and forced him to pay tribute. In the east a Muslim named Bakhtyar raided Magadha and used the plunder to raise a larger force that conquered much of Bengal; his army slaughtered Buddhist monks, thinking they were Brahmins. However, the Khalji Bakhtyar met tough resistance in Tibet and had to return to Bengal where he died. The Ghuri dynasty ended soon after Muhammad Ghuri was murdered at Lahore in 1206 by his former slave Aybak, who assumed power but died in 1210.
The struggle for power was won by Aybak's son-in-law Iltutmish, who defeated and killed Aybak's successor. Then in 1216 Iltutmish captured his rival Yildiz, who had been driven by Khwarezm-Shah from Ghazni to the Punjab; the next year he expelled Qabacha from Lahore. In 1221 Mongols led by Genghis Khan pushed Khwarezm-Shah and other refugees across the Indus into the Punjab. Iltutmish invaded Bengal and ended the independence of the Khalji chiefs; but he met with Guhilot resistance in Rajputana before plundering Bhilsa and Ujjain in Malwa. Chahadadeva captured and ruled Narwar with an army of over 200,000 men, defeating Iltutmish's general in 1234, but he was later defeated by the Muslim general Balban in 1251. After Qabacha drowned in the Indus, Iltutmish was recognized as the Baghdad Caliph's great sultan in 1229 until he died of disease seven years later.
Factional strife occurred as Iltutmish's daughter Raziyya managed to rule like a man for three years before being killed by sexist hostility; his sons, grandson, and the "Forty" officials, who had been his slaves, struggled for power and pushed back the invading Mongols in 1245. After Iltutmish's son Mahmud became king, the capable Balban gained control. In 1253 the Indian Muslim Raihan replaced Balban for a year until the Turks for racist reasons insisted Balban and his associates be restored. When Mahmud died childless in 1265, Balban became an effective sultan. He said, "All that I can do is to crush the cruelties of the cruel and to see that all persons are equal before the law."13 Mongols invaded again in 1285 and killed Balban's son; two years later the elderly Balban died, and in 1290 the dynasty of Ilbari Turks was replaced by the Khalji Turks with ties to Afghanistan.
Chola king Rajendra I (r. 1012-44) ruled over most of south India and even invaded Sumatra and the Malay peninsula. His son Rajadhiraja I's reign (1018-52) overlapped his father's, as he tried to put down rebellions in Pandya and Chera, invading western Chalukya and sacking Kalyana. Cholas were criticized for violating the ethics of Hindu warfare by carrying off cows and "unloosing women's girdles." Rajadhiraja was killed while defeating Chalukya king Someshvara I (r. 1043-68). In the Deccan the later Chalukyas battled their neighbors; led by Vikramaditya, they fought a series of wars against the powerful Cholas. After battling his brother Vikramaditya, Someshvara II reigned 1068-76; in confederacy with Chaulukya Karna of Gujarat, he defeated the Paramara Jayasimha and occupied Malava briefly. Becoming Chalukya king, Vikramaditya VI (r. 1076-1126) invaded the Cholas and took Kanchi some time before 1085.
When the Vaishnavites Mahapurna and Kuresha had their eyes put out, probably by Kulottunga I in 1079, the famous philosopher Ramanuja took refuge in the Hoysala country until Kulottunga died. Ramanuja modified Shankara's nondualism in his Bhasya and emphasized the way of devotion (bhakti). He believed the grace of God was necessary for liberation. Although he practiced initiations and rituals, Ramanuja recognized that caste, rank, and religion were irrelevant to realizing union with God. He provided the philosophical reasoning for the popular worship of Vishnu and was thought to be 120 when he died in 1137.
In Sri Lanka the Sinhalese harassed the occupying Chola forces until they withdrew from Rohana in 1030, enabling Kassapa VI (r. 1029-40) to govern the south. When he died without an heir, Cholas under Rajadhiraja (r. 1043-54) regained control of Rajarata. After 1050 a struggle for power resulted in Kitti proclaiming himself Vijayabahu I (r. 1055-1110). However, in 1056 a Chola army invaded to suppress the revolt in Rohana. Vijayabahu fled to the hills, and his army was defeated near the old capital of Anuradhapura; yet he recovered Rohana about 1061. The Chola empire was also being challenged by the western Chalukyas during the reign (1063-69) of Virarajendra. The new Chola king Kulottunga I (r. 1070-1120), after being defeated by Vijayabahu, pulled his forces out of Sri Lanka. Vijayabahu took over the north but had to suppress a rebellion by three brothers in 1075 near Polonnaruwa. After his envoys to the Chalukya king at Karnataka were mutilated, Vijayabahu invaded Chola around 1085; but he made peace with Kulottunga in 1088. Vijayabahu restored irrigation and centralized administration as he patronized Buddhism. Vijayabahu was succeeded by his brother Jayabahu I; but a year later Vikramabahu I (r. 1111-32) took control of Rajarata and persecuted monks while the sons of Vijayabahu's sister Mitta ruled the rest of Sri Lanka.
The Hoysala king Vinayaditya (r. 1047-1101) acknowledged Chalukya supremacy; but after his death, the Hoysalas tried to become independent by fighting the Chalukyas. Kulottunga ordered a land survey in 1086. The Cholas under Kulottunga invaded Kalinga in 1096 to quell a revolt; a second invasion in 1110 was described in the Kalingattupparani of court poet Jayangondar. After Vikramaditya VI died, Vikrama Chola (r. 1118-1135) regained Chola control over the Vengi kingdom, though the Chalukyas ruled the Deccan until the Kalachuri king Bijjala took Kalyana from Chalukya king Taila III in 1156; the Kalachuris kept control for a quarter century. Gujarat's Chalukya king Kumarapala was converted to Jainism by the learned Hemachandra (1088-1172) and prohibited animal sacrifices, while Jain king Bijjala's minister Basava (1106-67) promoted the Vira Shaiva sect that emphasized social reform and the emancipation of women. Basava disregarded caste and ritual as shackling and senseless. When an outcaste married an ex-Brahmin bride, Bijjala sentenced them both, and they were dragged to death in the streets of Kalyana. Basava tried to convert the extremists to nonviolence but failed; they assassinated Bijjala, and the Vira Shaivas were persecuted. Basava asked, "Where is religion without loving kindness?" Basava had been taught by Allama Prabhu, who had completely rejected external rituals, converting some from the sacrifice of animals to sacrificing one's bestial self.
In his poem The Arousing of Kumarapala, which describes how Hemachandra converted King Kumarapala, Somaprabha warned Jains against serving the king as ministers, for such service means harming others and extorting fortunes that one's master may take. In the mid-12th century the island of Sri Lanka suffered a three-way civil war. Ratnavali arranged for her son Parakramabahu to succeed childless Kitsirimegha in Dakkinadesa. Parakramabahu defeated and captured Gajabahu (r. 1132-53), taking over Polonnaruwa. However, his pillaging troops alienated the people, who turned to Manabharana. Parakramabahu allied with Gajabahu, becoming his heir, and defeated Manabharana. Parakramabahu I (r. 1153-86) restored unity but harshly suppressed a Rohana rebellion in 1160 and crushed Rajarata resistance in 1168. He used heavy taxation to rebuild Pulatthinagara and Anuradhapura, which had been destroyed by the Cholas. The Culavamsa credits Parakramabahu with restoring or building 165 dams, 3910 canals, 163 major tanks, and 2376 minor tanks. He developed trade with Burma. Sri Lanka aided a Pandya ruler in 1169 when Kulashekhara Pandya defeated and killed Parakrama Pandya, seizing Madura; but Chola king Rajadhiraja II (r. 1163-79) brought the Pandya civil war to an end. This enabled larger Chola armies to defeat the Sri Lanka force by 1174. Parakramabahu was succeeded by his nephew, who was slain a year later by a nobleman trying to usurp the throne. Parakramabahu's son-in-law Nissankamalla stopped that and ruled Sri Lanka for nine years. He also was allied with the Pandyas and fought the Cholas.
During the next eighteen years Sri Lanka had twelve changes of rulers, though Nissankamalla's queen, Kalyanavati reigned 1202-08. Four Chola invasions further weakened Sri Lanka. Queen Lilavati ruled three different times and was supported by the Cholas. In 1212 the Pandyan prince Parakramapandu invaded Rajarata and deposed her; but three years later the Kalinga invader Magha took power. The Culavamsa criticized Magha (r. 1215-55) for confiscating the wealth of the monasteries, taxing the peasants, and letting his soldiers oppress the people. Finally the Sinhalese alliance with the Pandyas expelled Magha and defeated the invasions by Malay ruler Chandrabanu. When his son came again in 1285, the Pandyan general Arya Chakravarti defeated him and ruled the north, installing Parakramabahu III (r. 1287-93) as his vassal at Polonnaruwa. Eventually the capital Polonnaruwa was abandoned; the deterioration of the irrigation system became irreversible as mosquitoes carrying malaria infested its remains. The Tamil settlers withdrew to the north, developing the Jaffna kingdom. Others settled in the wet region in the west, as the jungle was tamed.
Hoysala king Ballala II proclaimed his independence in 1193. Chola king Kulottunga III (r. 1178-1216) ravaged the Pandya country about 1205, destroying the coronation hall at Madura; but a few years later he was overpowered by the Pandyas and saved from worse defeat by Hoysala intervention, as Hoysala king Ballala II (r. 1173-1220) had married a Chola princess. In the reign (1220-34) of Narasimha II the Hoysalas fought the Pandyas for empire, as Chola power decreased. Narasimha's son Someshvara (r. 1234-63) was defeated and killed in a battle led by Pandya Jatavarman Sundara. Chola king Rajendra III (r. 1246-79) was a Pandyan feudatory from 1258 to the end of his reign. The Cholas had inflicted much misery on their neighbors, even violating the sanctity of ambassadors. The Pandyas under their king Maravarman Kulashekhara, who ruled more than forty years until 1310, overcame and annexed the territories of the Cholas and the Hoysalas in 1279 and later in his reign gained supremacy over Sri Lanka.
The dualist Madhva (1197-1276) was the third great Vedanta philosopher after Shankara and Ramanuja. Madhva also opened the worship of Vishnu to all castes but may have picked up the idea of damnation in hell from missionary Christians or Muslims. He taught four steps to liberation: 1) detachment from material comforts, 2) persistent devotion to God, 3) meditation on God as the only independent reality, and 4) earning the grace of God.
Marco Polo on his visit to south India about 1293 noted that climate and ignorant treatment did not allow horses to thrive there. He admired Kakatiya queen Rudramba, who ruled for nearly forty years. He noted the Hindus' strict enforcement of justice against criminals and abstention from wine, but he was surprised they did not consider any form of sexual indulgence a sin. He found certain merchants most truthful but noted many superstitious beliefs. Yet he found that ascetics, who ate no meat, drank no wine, had no sex outside of marriage, did not steal, and never killed any creature, often lived very long lives. Marco Polo related a legend of brothers whose quarrels were prevented from turning to violence by their mother who threatened to cut off her breasts if they did not make peace.
Nizam-ud-din Auliya was an influential Sufi of the Chishti order that had been founded a century before. He taught love as the means to realize God. For Auliya universal love was expressed through love and service of humanity. The Sufis found music inflamed love, and they interpreted the Qur'an broadly in esoteric ways; the intuition of the inner light was more important to them than orthodox dogma. Auliya was the teacher of Amir Khusrau (1253-1325), one of the most prolific poets in the Persian language. Many of Khusrau's poems, however, glorified the bloody conquests of the Muslim rulers so that "the pure tree of Islam might be planted and flourish" and the evil tree with deep roots would be torn up by force. He wrote,
The whole country, by means of the sword of our holy warriors,
has become like a forest denuded of its thorns by fire.
The land has been saturated with the water of the sword,
and the vapors of infidelity have been dispersed.
The strong men of Hind have been trodden under foot,
and all are ready to pay tribute.
Islam is triumphant; idolatry is subdued.
Had not the law granted exemption from death
by the payment of poll-tax,
the very name of Hind, root and branch,
would have been extinguished.
From Ghazni to the shore of the ocean
you see all under the dominion of Islam.14
In 1290 the Khalji Jalal-ud-din Firuz became sultan in Delhi but refused to sacrifice Muslim lives to take Ranthambhor, though his army defeated and made peace with 150,000 invading Mongols. Genghis Khan's descendant Ulghu and 4,000 others accepted Islam and became known as the "new Muslims." This lenient sultan sent a thousand captured robbers and murderers to Bengal without punishment. His more ambitious nephew 'Ala-ud-din Khalji attacked the kingdom of Devagiri, gaining booty and exacting from Yadava king Ramachandra gold he used to raise an army of 60,000 cavalry and as many infantry. In 1296 he lured his uncle into a trap, had him assassinated, and bribed the nobles to proclaim him sultan. Several political adversaries were blinded and killed. The next year 'Ala-ud-din sent an army headed by his brother Ulugh Khan to conquer Gujarat; according to Wassaf they slaughtered the people and plundered the country. Another 200,000 Mongols invaded in 1299, but they were driven back. Revolts by his nephews and an old officer were ruthlessly crushed. Money was extorted; a spy network made nobles afraid to speak in public; alcohol was prohibited; and gatherings of nobles were restricted. Orders were given that Hindus were not to have anything above subsistence; this prejudicial treatment was justified by Islamic law.
In addition to his three plays, four poems by Kalidasa survive. The Dynasty of Raghu is an epic telling the story not only of Rama but of his ancestors and descendants. King Dilipa's willingness to sacrifice himself for a cow enables him to get a son, Raghu. Consecrated as king, Raghu tries to establish an empire with the traditional horse sacrifice, in which a horse is allowed to wander for a year into other kingdoms, which must either submit or defend themselves against his army. His son Aja is chosen by the princess Indumati. Their son Dasharatha has four sons by three wives; but for killing a boy while hunting, he must suffer the banishment of his eldest son Rama, whose traditional story takes up a third of the epic. Rama's son Kusha restores the capital at Ayodhya; but after a line of 22 kings Agnivarna becomes preoccupied with love affairs before dying and leaving a pregnant queen ruling as regent.
Another epic poem, The Birth of the War-god, tells how the ascetic Shiva is eventually wooed by Parvati, daughter of the Himalaya mountains, after the fire from Shiva's eye kills the god of Love and she becomes an ascetic. After being entertained by nymphs, Shiva restores the body of Love. Their son Kumara is made a general by the god Indra; after their army is defeated by Taraka's army, Kumara kills the demon Taraka. Kalidasa's elegy, The Cloud-Messenger, describes how a yaksha, an attendant of Kubera, the god of Wealth, who has been exiled from the Himalayas to the Vindhya mountains for a year, sends a cloud as a messenger to his wife during the romantic rainy season. Kalidasa is also believed to be the author of a poem on the six seasons in India.
Bana wrote an epic romance on the conquests of Harsha in the 7th century and another called Kadambari. Bana was not afraid to criticize the idea of kings being divine nor the unethical and cruel tactics of the political theorist Kautilya. Bana was one of the few Indian writers who showed concern for the poor and humble.
About the 6th or 7th century Bhartrihari wrote short erotic poems typical of those later collected into anthologies. He reminded himself that virtue is still important.
Granted her breasts are firm, her face entrancing,
Her legs enchanting - what is that to you?
My mind, if you would win her, stop romancing.
Have you not heard, reward is virtue's due?15
Torn between sensual and spiritual love, Bhartrihari found that the charms of a slim girl disturbed him. Should he choose the youth of full-breasted women or the forest? Eventually he moved from the dark night of passion to the clear vision of seeing God in everything. He noted that it is easier to take a gem from a crocodile's jaws or swim the ocean or wear an angry serpent like a flower in one's hair or squeeze oil from sand, water from a mirage, or find a rabbit's horn than it is to satisfy a fool whose opinions are set. Bhartrihari asked subtle questions.
Patience, better than armor, guards from harm.
And why seek enemies, if you have anger?
With friends, you need no medicine for danger.
With kinsmen, why ask fire to keep you warm?
What use are snakes when slander sharper stings?
What use is wealth where wisdom brings content?
With modesty, what need for ornament?
With poetry's Muse, why should we envy kings?16
The erotic poetry of Amaru about the 7th century often expressed the woman's viewpoint. When someone questioned her pining and faithfulness, she asked him to speak softly because her love, living in her heart, might hear. In another poem the narrator tries to hide her blushing, sweating cheeks but finds her bodice splitting of its own accord. This poet seemed to prefer love-making to meditation. The erotic and the religious were combined in the 12th century Bengali poet Jayadeva's "Songs of the Cowherd" (Gita Govinda) about the loves of Krishna. A poet observed that most people can see the faults in others, and some can see their virtues; but perhaps only two or three can see their own shortcomings.
In the late 11th century Buddhist scholar Vidyakara collected together an anthology of Sanskrit court poetry, Treasury of Well-Turned Verse (Subhasitaratnakosa), with verses from more than two hundred poets, mostly from the previous four centuries. Although it begins with verses on the Buddha and the bodhisattvas Lokesvara and Manjughosa, Vidyakara also included verses on Shiva and Vishnu. One poet asked why a naked ascetic with holy ashes needed a bow or a woman. (103) After these chapters the poetry is not religious, with verses on the seasons and other aspects of nature. Love poetry is ample, and it is quite sensual, though none of it is obscene. Women's bodies are described with affection, and sections include the joys of love as well as the sad longing of love-in-separation. An epigram complains of a man whose body smells of blood as his action runs to slaughter because his sense of right and wrong is no better than a beast's. Only courage is admired in a lion, but that makes the world seem cheap. (1091) Another epigram warns that the earth will give no support nor a wishing tree a wish, and one's efforts will come to nothing for one whose sin accumulated in a former birth. (1097) Shardarnava described peace in the smooth flow of a river; but noting uprooted trees along the shore, he inferred concealed lawlessness. (1111)
Dharmakirti's verses describe the good as asking no favors from the wicked, not begging from a friend whose means are small, keeping one's stature in misfortune, and following in the footsteps of the great, though these rules may be as hard to travel as a sword blade. (1213) Another poet found that he grew mad like a rutting elephant when, knowing little, he thought he knew everything; but after consorting with the wise and gaining some knowledge, he knew himself a fool, and the madness left like a fever. (1217) Another proclaimed as good the one who offers aid to those in distress, not the one who is skillful at keeping ill-gotten gains. (1226) A poet noted that countless people get angry with or without a cause, but perhaps only five or six in the world do not get angry when there is a cause. (1236) The great guard their honor, not their lives; fear evil, not enemies; and seek not wealth but those who ask for it. (1239) Small-minded people ask if someone is one of them or an outsider, but the noble mind takes the whole world for family. (1241) An anonymous poet asked these great questions:
Can that be judgment where compassion plays no part,
or that be the way if we help not others on it?
Can that be law where we injure still our fellows,
or that be sacred knowledge which leads us not to peace?17
A poet advised that the wise, considering that youth is fleeting, the body soon forfeited, and wealth soon gone, lay up no deeds that, though pleasurable here, will ripen into bitter fruit in future lives. (1686)
Although collected from ancient myths and folklore, the eighteen "great" Puranas were written between the 4th and 10th centuries. Originally intended to describe the creation of the universe, its destruction and renewal, genealogies, and chronicles of the lawgivers and the solar and lunar dynasties, they retold myths and legends according to different Vaishnavite and Shaivite sects with assorted religious lore. The Agni Puranam, for example, describes the avatars Rama and Krishna, religious ceremonies, Tantric rituals, initiation, Shiva, holy places, duties of kings, the art of war, judicature, medicine, worship of Shiva and the Goddess, and concludes with a treatise on prosody, rhetoric, grammar, and yoga. Much of this was apparently taken from other books.
The early Vishnu Purana explains that although all creatures are destroyed at each cosmic dissolution, they are reborn according to their good or bad karma; this justice pleased the creator Brahma. In this Purana Vishnu becomes the Buddha in order to delude the demons so that they can be destroyed. The gods complain that they cannot kill the demons because they are following the Vedas and developing ascetic powers. So Vishnu says he will bewitch them to seek heaven or nirvana and stop evil rites such as killing animals. Then the demons, reviling the Vedas, the gods, the sacrificial rituals, and the Brahmins, go on the wrong path and are destroyed by the gods. The Vishnu Purana describes the incarnations of Vishnu, including his future life as Kalkin at the end of the dark age (Kali yuga) when evil people will be destroyed, and justice (dharma) will be re-established in the Krita age. The gradual ethical degeneration is reflected in the change in Hindu literature from the heroic Vedas to the strategic epics and then to deception and demonic methods in the Puranas. The Padma Purana explains the incarnations of Vishnu as fulfilling a curse from lord Bhrigu, because Vishnu had killed Bhrigu's wife. Thus Vishnu is born again and again for the good of the world when virtue has declined. By appearing as a naked Jain and the Buddha, Vishnu has turned the demons away from the Vedas to the virtue (dharma) of the sages.
The most popular of all the Puranas, the Srimad Bhagavatam was attributed to the author of the Mahabharata, Vyasa, given out through his son Suta. However, scholars consider this work, which emphasizes the way of devotion (bhakti), one of the later great Puranas and ascribe it to the grammarian Vopadeva. The Bhagavatam retells the stories of the incarnations of the god Vishnu with special emphasis on Krishna. Even as a baby and a child the divine Krishna performs many miracles and defeats demons. The young Krishna is not afraid to provoke the wrath of the chief god Indra by explaining that happiness and misery, fear and security, result from the karma of one's actions. Even a supreme Lord must dispense the fruits of others' karma and thus is dependent on those who act. Thus individuals are controlled by the dispositions they have created through their former actions. Karma, or we might say experience, is the guru and the supreme Lord. Brahmins should maintain themselves by knowledge of the Veda, Kshatriyas by protecting the country, Vaishyas by business, and Sudras by service. Krishna also notes that karma based on desire is the product of ignorance, of not understanding one's true nature.
The king who is listening to the stories of Krishna asks how this Lord could sport with other men's wives; but the author excuses these escapades by explaining that although the superhuman may teach the truth, their acts do not always conform to their teachings. The intelligent understand this and follow only the teachings. The worshiping author places the Lord above good and evil and claims that the men of Vraja did not become angry at Krishna because they imagined their wives were by their sides all the time. Krishna also fought and killed many enemies, "as the lord of the jungle kills the beasts."18 He killed Kamsa for unjustly appropriating cows. Krishna fought the army of Magadha king Jarasandha seventeen times and presented the spoils of war to the Yadu king. He killed Satadhanva over a gem. Krishna carried off by force and thus wed Rukmini by the demon mode. Several other weddings followed, and Krishna's eight principal queens were said to have borne him ten sons each. The author claimed he had 16,000 wives and lived with them all at the same time in their own apartments or houses.
In the 18th battle Jarasandha's army finally defeated Krishna's, and it was said that Jarasandha had captured 20,800 kings; but Krishna got Bhima to kill Jarasandha, and all the confined Kshatriyas were released. Krishna cut off the head of his foe Sishupala with his razor-sharp discus; he also destroyed the Soubha and killed Salva, Dantavakra and his brother. Although the methods of action (karma) and knowledge (jnana) are discussed in relation to Samkhya philosophy and yoga, in the Bhagavatam the practice of devotion (bhakti) to God in the form of Krishna is favored as the supreme means of salvation. The great war between the Kurus and the Pandavas is explained as Krishna's way of removing the burden of the Earth. Krishna tells his own people, the Yadus, to cross the sea to Prabhasa and worship the gods, Brahmins, and cows. There, rendered senseless by Krishna's illusion (maya), they indulge in drink and slaughter each other. Krishna's brother Balarama and he both depart from their mortal bodies, Krishna ascending to heaven with his chariot and celestial weapons.
Before the 11th century seventy stories of "The Enchanted Parrot" were employed to keep a wife entertained while her husband was away so that she would not find a lover. A charming parrot satirizes women, comparing them to kings and serpents in taking what is near them. The proverb is quoted that when the gods want to ruin someone they first take away that person's sense of right and wrong, and the listener is warned not to set one's heart on riches gained by wickedness nor on an enemy one has humiliated. When the husband returns, the parrot is freed from the curse and flies to heaven amid a rain of flowers.
In the late 11th century Somadeva added to the Great Story (Brihat-katha) of Gunadhya to make the Ocean of the Streams of Story (Katha-sarit-sagara), a collection of more than 350 stories in Sanskrit verse. Noting that jealousy interferes with discernment, the author tells how a king orders a Brahmin executed for talking with his queen; but on the way to his punishment, a dead fish laughs because, while so many men are dressed as women in the king's harem, an innocent Brahmin is to be killed. The narrator tells the king this and gains respect for his wisdom and release for the Brahmin. The author also notes that for the wise, character is wealth. Somadeva recounts the legendary stories of the Vatsa king Udayana and his marriages to Vasavadatta and the Magadhan princess Padmavati. Vasavadatta is commended for cooperating in the separation required by Yaugandharayana's scheme; he says she is a real queen because she does not merely comply with her husband's wishes but cares for his true interests.
An eminent merchant sends his son to a courtesan to learn to beware of immorality incarnate in harlots, who rob rich young men blinded by their virility. Like all professionals, the prostitute has her price but must guard against being in love when no price is paid. She must be a good actress in seducing and milking the man of his money, deserting him when it is gone, and taking him back when he comes up with more money. Like the hermit, she must learn to treat them all equally whether handsome or ugly. Nonetheless the son is taken in by a courtesan and loses all his money, but he contrives to get it back by using a monkey trained to swallow money and give it back on cue.
From Somadeva also comes the Vampire's Tales of "The King and the Corpse." In an unusual frame for 25 stories a king is instructed to carry a hanged corpse inhabited by a vampire, who poses a dilemma at the conclusion of each tale. For example, when heads are cut off and are put back on each other's bodies, which person is which? After four Brahmin brothers are orphaned, the oldest tries to hang himself; but he is cut down and saved by a man who asks him why a learned person should despair when good fortune comes from good karma and bad luck from bad karma. The answer to unhappiness, then, is doing good; but to kill oneself would bring the suffering of hell. So the brothers combine their talents to create a lion from a bone; but the lion kills them, as their creation was not intelligent but evil. The last brother, who brought the lion's completed body to life, is judged most responsible by the king because he should have been more aware of what would result.
1. Prince Ilango Adigal, Shilappadikaram, tr. Alain Daniélou, p. 202.
2. Tiruvalluvar, The Kural, tr. P. S. Sundarum, 99.
3. Ibid., 311-320.
4. Ibid., 981-990.
5. Shankara, Crest-Jewel of Wisdom, tr. Mohini M. Chatterji, 58.
6. Bhasa, Avimaraka, tr. J. L. Masson and D. D. Kosambi, p. 73.
7. Ibid., pp. 130-131.
8. Kalidasa, Shakuntala, tr. Michael Coulson, 1:11.
9. Tibet's Great Yogi Milarepa, tr. Kazi Dawa-Samdup, p. 176.
10. Ibid., p. 253.
11. Tibetan Yoga and Secret Doctrines, tr. Kazi Dawa-Samdup, p. 75.
12. Majumdar, R. C., An Advanced History of India, p. 292.
13. Speaking of Shiva, tr. A. K. Ramanujan, p. 54.
14. Elliot, H. M., The History of India as Told by Its Own Historians, Vol. 3, p. 546.
15. Poems from the Sanskrit, tr. John Brough, p. 58.
16. Ibid., p. 71.
17. An Anthology of Sanskrit Court Poetry, tr. Daniel H. H. Ingalls, 1629.
18. Srimad Bhagavatam, tr. N. Raghunathan, 10:44:40, Vol. 2, p. 321.
This chapter has been published in the book INDIA & Southeast Asia to 1800.
| http://www.san.beck.org/AB2-India.html | 13
29 | The Constitution of the United States of America, written well over 200 years ago, has been the foundation for building one of the great nations. It is the central instrument of American government and the supreme law of the land. For more than 200 years, it has guided the evolution of U.S. governmental institutions and has provided the basis for political stability, individual freedom, economic growth and social progress.
The birth of the Constitution, however, was not accidental; it had a complicated economic and political background. The period after the Revolutionary War was characterized by economic depression and political crisis, because the Articles of Confederation devised only a loose association among the states and set up a central government with very limited powers. The central government could not gain a dominant position in the country's political life, while the individual states could do things in their own ways. In this chaotic situation, the central government was incapable of paying its debt, of regulating foreign and domestic commerce, of maintaining a steady value of the currency, and, worst of all, incapable of keeping a strong military force to protect the country's interests from foreign violations. As time went by, the old system became more and more adverse to the development of the young nation, and political reform seemed inevitable. The best solution was to draw up a new constitution in place of the Articles of Confederation.
The Constitution was drawn up by 55 delegates of twelve states (all but Rhode Island) to the Constitutional Convention in Philadelphia during the summer of 1787 and ratified by the states in 1788. That distinguished gathering at Philadelphia’s Independence Hall brought together nearly all of the nation’s most prominent men, including George Washington, James Madison, Alexander Hamilton and Benjamin Franklin. Many were experienced in colonial and state government and others had records of service in the army and in the courts. As Thomas Jefferson wrote John Adams when he heard who had been appointed: “It is really an assembly of demigods.”
Despite the consensus among the framers on the objectives of the Constitution, the controversy over the means by which those objectives could be achieved was lively. However, most of the issues were settled by the framers' efforts and compromises, and thus the finished Constitution has been referred to as a "bundle of compromises". It was only through give-and-take that a successful conclusion was achieved. Such efforts and compromises in the Constitutional Convention of 1787 produced the most enduring written Constitution ever created by humankind. The men who were at Philadelphia that hot summer hammered out a document defining distinct powers for the Congress of the United States, the president, and the federal courts. This division of authority is known as a system of checks and balances, and it ensures that none of the branches of government can dominate the others. The Constitution also establishes and limits the authority of the Federal Government over the states and emphasizes that the power of the states will serve as a check on the power of the national government.
Separation of Powers in the Central Government
One important principle embodied in the U.S. Constitution is separation of powers. To prevent concentration of power, the U.S. Constitution divides the central government into three branches and creates a system of checks and balances. Each of the three governmental branches, legislative, executive and judicial, “checks” the powers of the other branches to make sure that the principal powers of the government are not concentrated in the hands of any single branch. The principle of separation of powers and the system of checks and balances perform essential functions and contribute to a stable political situation in the United States.
1. Theory of Separation of Powers
The principle of separation of powers dates back as far as Aristotle’s time. Aristotle favored a mixed government composed of monarchy, aristocracy, and democracy, seeing none as ideal, but a mix of the three useful by combining the best aspects of each. James Harrington, in his 1656 Oceana, brought these ideas up-to-date and proposed systems based on the separation of power.
Many of the framers of the U.S. Constitution, such as Madison, studied history and political philosophy. They greatly appreciated the idea of separation of power on the grounds of their complex views of governmental power. Their experience with the Articles of Confederation taught them that the national government must have the power needed to achieve the purposes for which it was to be established. At the same time, they were worried about the concentration of power in one person’s hands. As John Adams wrote in his A Defense of the Constitution of Government of the United States of America (1787), “It is undoubtedly honorable in any man, who has acquired a great influence, unbounded confidence, and unlimited power, to resign it voluntarily; and odious to take advantage of such an opportunity to destroy a free government: but it would be madness in a legislator to frame his policy upon a supposition that such magnanimity would often appear. It is his business to contrive his plan in such a manner that such unlimited influence, confidence, and power, shall never be obtained by any man.” (Isaak 2004:100) Such worries compelled the framers to find a good way to establish a new government, thus separation of powers and a balanced government became a good choice.
Two political theorists had great influence on the creation of the Constitution. John Locke, an important British political philosopher, had a large impact through his Second Treatise of Government (1690). Locke argued that sovereignty resides in individuals, not rulers. A political state, he theorized, emerged from a social contract among the people, who consent to government in order to preserve their lives, liberties, and property. In the words of the Declaration of Independence, which also drew heavily on Locke, governments derive “their just powers from the consent of the governed.” Locke also pioneered the idea of the separation of powers, and he separated the powers into an executive and a legislature. The French political philosopher Baron de Montesquieu, another major intellectual influence on the Constitution, further developed the concept of separation of powers in his treatise The Spirit of the Laws (1748), which was highly regarded by the framers of the U.S. Constitution. Montesquieu’s basic contention was that those entrusted with power tend to abuse it; therefore, if governmental power is fragmented, each power will operate as a check on the others. In its usual operational form, one branch of government (the legislative) is entrusted with making laws, a second (the executive) with executing them, and a third (the judiciary) with resolving disputes in accordance with the law.
Based on the theory of Baron de Montesquieu and John Locke, the framers carefully spelled out the independence of the three branches of government: executive, legislative, and judicial. At the same time, however, they provided for a system in which some powers should be shared: Congress may pass laws, but the president can veto them; the president nominates certain public officials, but Congress must approve the appointments; and laws passed by Congress as well as executive actions are subject to judicial review. Thus the separation of powers is offset by what are called checks and balances.
2. Separation of Powers among Three Governmental Branches
The separation of powers devised by the framers of the U.S. Constitution serves two goals: to prevent the concentration of power and to provide each branch with weapons to fight off encroachment by the other two branches. As James Madison argued in the Federalist Papers (No. 51), "Ambition must be made to counteract ambition." Clearly, the system of separated powers is not designed to maximize efficiency; it is designed to maximize freedom. In the Constitution of the United States, the Legislative, composed of the House and Senate, is set up in Article 1; the Executive, composed of the President, Vice-President, and the Departments, is set up in Article 2; the Judicial, composed of the federal courts and the Supreme Court, is set up in Article 3. Each of these branches has certain powers, and each of these powers is limited.
The First Article of the U.S. Constitution says, “All legislative powers herein granted shall be vested in a Congress of the United States, which shall consist of a Senate and House of Representatives.” These words clearly define the most important power of Congress: to legislate for the United States. At the same time, the framers granted some specific powers to Congress. Congress has the power to impeach both executive officials and judges. The Senate tries all impeachments. Besides, Congress can override a Presidential veto. Congress may also influence the composition of the judicial branch. It may establish courts inferior to the Supreme Court and set their jurisdiction. Furthermore, Congress regulates the size of the courts. Judges are appointed by the President with the advice and consent of the Senate. The compensation of executive officials and judges is determined by Congress, but Congress may not increase or diminish the compensation of a President, or diminish the compensation of a judge, during his term in office. Congress determines its own members’ emoluments as well. In short, the main powers of the Legislature include: Legislating all federal laws; establishing all lower federal courts; being able to override a Presidential veto; being able to impeach the President as well as other executive officials.
Executive power is vested in the President by the U.S. Constitution in Article 2. The principal responsibility of the President is to ensure that all laws are faithfully carried out. The President is the chief executive officer of the federal government. He is the leader of the executive branch and the commander in chief of the armed forces. He has the power to make treaties with other nations, with the advice and consent of two-thirds of the Senate. The President also appoints, with Senate consent, diplomatic representatives, Supreme Court judges, and many other officials. Except in cases of impeachment, he also has the power to issue pardons and reprieves. Such pardons are not subject to confirmation by either house of Congress, or even to acceptance by the recipient. Another important power granted to the President is veto power over all bills, but Congress, as noted above, may override any veto except for a pocket veto by a two-thirds majority in each house. When the two houses of Congress cannot agree on a date for adjournment, the President may settle the dispute. Either house or both houses may be called into emergency session by the President.
The judicial power—the power to decide cases and controversies—is vested in the Supreme Court and inferior court established by Congress. The following are the powers of the Judiciary: the power to try federal cases and interpret the laws of the nation in those cases; the power to declare any law or executive act unconstitutional. The power granted to the courts to determine whether legislation is consistent with the Constitution is called judicial review. The concept of judicial review is not written into the Constitution, but was envisioned by many of the framers. The Supreme Court established a precedent for judicial review in Marbury v. Madison. The precedent established the principle that a court may strike down a law it deems unconstitutional.
3. Checks and Balances
The framers of the U.S. Constitution saw checks and balances as essential for the security of liberty under the Constitution. They believed that by balancing the powers of the three governmental branches, the tendencies in human nature toward tyranny could be checked and restrained. John Adams praised the balanced government as the "most stupendous fabric of human invention." In his A Defense of the Constitution of Government of the United States of America (1787), he wrote, "In the mixed government we contend for, the ministers, at least of the executive power, are responsible for every instance of the exercise of it; and if they dispose of a single commission by corruption, they are responsible to a house of representatives, who may, by impeachment, make them responsible before a senate, where they may be accused, tried, condemned, and punished, by independent judges." (Isaak 2004:103-104) So the system of checks and balances was established and became an important part of the U.S. Constitution. With checks and balances, each of the three branches of government can limit the powers of the others. This way, no one branch is too powerful. Each branch "checks" the powers of the other branches to make sure that the power is balanced between them. The major checks possessed by each branch are those described in the preceding sections on the legislative, executive and judicial powers.
By distributing the essential powers of the government among three separate but interdependent branches, the Constitutional framers ensured that the principal powers of the government, legislative, executive and judicial, were not concentrated in the hands of any single branch. Allocating governmental authority among three separate branches also prevented the formation of too strong a national government capable of overpowering the individual state governments. To temper the separation of powers, the framers created the now well-known system of checks and balances. In this system, powers are shared among the three branches of government. At the same time, the powers of one branch can be challenged by another branch. As one of the basic doctrines in the U.S. Constitution, separation of powers and a system of checks and balances contribute to a stable political situation in the United States.
Separating Powers between the Federal Government and the States
As is mentioned above, the United States was in a chaotic state after the American Revolution. Under the Articles of Confederation, all of the thirteen states only had a kind of very loose connection. They were like thirteen independent countries, and could do things in their own ways. They had their own legal systems and constitutions, made their own economic, trade, tax and even monetary policies, and seldom accepted any orders from the central government. Localism made the state congresses set barriers to goods from other states, thus trade between states could not develop. At the same time, the central government did not have any important powers to control the individual states well. As time went by, the old system became more and more adverse to the stability and development of this young country. Many Americans viewed a number of grave problems as arising from the weakness of the Confederation. They thought the Confederation was so weak that it was in danger of falling apart under either foreign or internal pressures. They appealed for reforming the governmental structure and establishing a stronger central government. This government should have some positive powers so that it could make and carry out policies to safeguard state sovereignty against foreign violations and to protect the people’s interests. This idea was embodied in the U.S. Constitution: The powers of the national government and the states were divided. The central government was specifically granted certain important powers while the power of the state governments was limited, and there were certain powers that they shared.
The powers granted to the Federal Government by the U.S. Constitution are enumerated principally as powers of Congress in Article I, Section 8. These powers can be classified as either economic or military. Economic and military power are fundamental and essential to any government. Possessing such powers, the U.S. central government was capable of governing the country effectively, maintaining a stable political situation and promoting economic development.
Economic powers delegated to the Federal Government include the authority to levy taxes, borrow money, regulate commerce, coin money, and establish bankruptcy laws. In Article I, Section 8, the Constitution writes, “The Congress shall have power to lay and collect taxes, duties, imposts and excises, to pay the debts and provide for the common defense and general welfare of the United State; …to borrow money on the credit of the United States; to regulate commerce with foreign nations, and among the several States, and with the Indian tribes; to establish a uniform rule of naturalization, and uniform laws on the subject of bankruptcies throughout the United States; to coin money, regulate the value thereof, and of foreign coin, and fix the standard of weights and measures.” According to this stipulation, the Federal Government has gathered the most important economic power into its own hands: with the right to collect taxes directly, the Federal Government could pay its debt and provide funds for the nation’s common defense and general welfare; with the right to issue uniform currency and to determine the value of foreign currencies, the Federal Government could control the money supply and restrain inflation; with the right to regulate trade with foreign nations and among the states, the Federal Government became able to control the economic situation of the country. The stipulation about commerce regulation won strongest support from big cities and centers of manufacturing industry and commerce, such as New York, Philadelphia and Boston, because they knew that the regulation of the central government would be quite helpful for the sale of their products. Alexander Hamilton, one of the most active representatives in the Constitutional Convention, pointed out that free trade in the whole nation was very profitable for any kind of business. For example, when the local market was weakened, the markets in other states and areas of the country would support the sale of the producers, thus their business could keep developing. Hamilton concluded that any farsighted businessman would see the power of the unity of the country, that they would find the unity of the whole nation would be much better than the separation of the thirteen states.
Power to Declare War
Certain military powers granted to the Federal Government involve declaring war, raising and supporting armies, regulating and maintaining navies, and calling forth the militia. In Article I, Section 8, the Constitution stipulates, "The Congress shall have power to declare war, grant letters of marque and reprisal, and make rules concerning captures on land and water; to raise and support armies, …to provide and maintain a Navy; to make rules for the government and regulation of the land and naval forces; to provide for calling forth the militia to execute the laws of the Union, suppress insurrections and repel invasions; to provide for organizing, arming, and disciplining the militia…." With these powers, the Federal Government can not only protect the land and provide a guarantee for the development of the country, but also create conditions to invade other countries, since it has the power to declare war and grant letters of marque and reprisal. The framers of the U.S. Constitution regarded the military power of the Federal Government as a tool to protect the domestic interests of their country from foreign invasion. John Jay, one of the three writers of "The Federalist Papers" and the first Chief Justice of the Supreme Court, even said that when a country wanted to gain something, it would engage itself in a war. Most representatives in the Constitutional Convention had realized that if the United States broke up, it would easily fall prey to its neighboring and enemy states. They saw that other countries still threatened the security of the United States. Great Britain was unwilling to withdraw from America and kept military bases on the northwestern boundary of the United States. At the same time, France blockaded some important river mouths so that it could monopolize the market, and Spain also tried to blockade the Mississippi River. The European powers did not want the United States to develop into a powerful nation, or to share their market, either in the United States itself or abroad. The framers of the U.S. Constitution fully realized that a strong navy and land force could become not only a tool to protect the interests of the United States, but also a tool to force other countries to open their markets. A strong army would definitely make the European countries respect their country.
Apart from the foreign troubles, the leaders of the United States had also seen the serious consequences of clashes between different classes. They believed that in time of trouble, a strong army would be decisive. Of course, they would not ignore the danger of such domestic rebellions as Shays' Rebellion. When talking about the danger of rebellions, James Madison said, "I have noticed a kind of unhappy people scattered in some states. They degrade under the human standard when the political situation remains steady; but when the society is in chaos, they would provide their fellow people with a great force." (Smith 1986:194) So the rulers of the country needed a strong army to suppress the revolt of these "unhappy people", and to maintain a stable domestic political situation.
While the U.S. Constitution grants many specific powers to the Federal Government, it at the same time lists a rather large number of things that the Federal Government is not allowed to do. Evidently, the framers were afraid that too strong a central government would easily bring about autocracy. In order to restrict the authority of the central government, the framers wanted to make it clear in the Constitution that certain powers were emphatically denied to the Federal Government. These restrictions on the powers of the Federal Government are spelled out chiefly in Article I, Section 9.
When the Constitution granted the Federal Government certain powers, the framers also considered reducing the power of the state governments, so that the central government could force the states to take unified steps if necessary. In Article I, Section 10, the Constitution stipulates, “No State shall enter into any treaty, alliance, or confederation; grant letters of marque and reprisal; coin money; emit bills of credit; make any thing but gold and silver coin a tender in payment or debts….No State shall, without the consent of the Congress, lay any imposts or duties on imports or exports, except what may be absolutely necessary for executing its inspection laws…. No State shall, without the consent of Congress, lay any duty of tonnage, keep troops, or ships of war in time of peace, enter into any agreement or compact with another State, or with a foreign power, or engage in war….” According to this clause, the states were deprived of the power to issue currency, to levy taxes freely, to keep troops in time of peace, to make a compact of agreement with another state of the U.S., or with a foreign state, and to engage in war. With the prohibition of the states from issuing currency, the United States could now avoid inflation and depreciation of currency caused by unregulated money supply. With the restriction of the states from levying taxes freely, the obstacles of the commerce were removed. Now the state congresses did not have the power to collect heavy taxes freely on goods from other states any more, thus the commerce in the United States began to thrive. With the prohibition of the states from keeping troops in time of peace and engaging in war, the territorial integrity of the United States could be guarded, and the Union could be maintained. As the power of the state governments was limited, people’s confidence in their central government was greatly strengthened. The society of the United States was being led onto a right path of development.
Although the power is restricted, the states still possess some necessary powers and exercise important functions in the United States. The Tenth Amendment of the U.S. Constitution indicates that the states possess those powers that are not given to the Federal Government or prohibited to the states. The Tenth Amendment stipulates, “The powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people.” State powers then are called reserved powers. Reserved powers are interpreted as the right to establish schools and supervise education, regulate intrastate commerce, conduct elections, establish local government units, and borrow money. In addition, a broad and generally undefined “police power” enables the states to take action to protect and promote the health, safety, morals, and general welfare of their inhabitants. All of these are functions that directly affect Americans every day and in every part of their lives.
There are still some powers that both the national and state governments can exercise. They are called concurrent powers, which include the power to tax and borrow money, to take property for public purposes, to enact bankruptcy laws, and to establish laws and courts.
Thus, in the course of framing the U.S. Constitution, a federal system was created by separating power between two levels of government, state and national. According to the Constitution, the Federal Government was granted certain powers, the states were given certain powers, and there were certain powers that they shared. In order to overcome a series of domestic crises and keep a stable political situation, a strong central government was created and granted certain important powers, while the power of the state governments was limited.
The U.S. Constitution has remained in force because its framers successfully separated and balanced governmental powers to safeguard the interests of majority rule and minority rights, of liberty and equality, and of the central and state governments. For over two centuries it has provided the basis for development of the United States and a guarantee for the stability of the country.
| http://www.earlyamerica.com/review/2009_summer_fall/constitution-separates-power.html | 13
24 | Aborigines were the first inhabitants of Australia, migrating there at least 40,000 years ago. While Asian explorers had landed in northern Australia well before AD 1500, it was not until the 17th century that the first Europeans from Holland managed to sail to Australia. Of the several Dutch expeditions into the southern oceans, the most successful was that of Abel Tasman, who in 1642 discovered an island now known as Tasmania. However, the Dutch did not formally occupy Australia, finding little there of value for European trade, opening the way for the later arrival of the English. Starting in 1765, Captain James Cook led a series of expeditions to Australia and he subsequently supported settlement in Australia. Curiously, it was a rising crime rate in England that led to the occupation of Australia. After the American Revolution ended in 1783, Britain moved quickly to establish its first settlement in Australia as a place to send its convicts, since it could no longer ship British convicts to America. In 1786, the British government announced that it would establish a penal settlement at Botany Bay in Australia, and in 1788, retired Royal Navy captain Arthur Phillip arrived at Botany Bay with more than 1,450 passengers. This included 736 convicts, 211 marines, 20 civil officers, and 443 seamen. Subsequently, he moved the fleet north to Port Jackson, an excellent natural harbor, and began the first permanent settlement on January 26, 1788 (now known as Australia Day). This settlement was subsequently named Sydney in honor of Lord Sydney, Britain's home secretary, who was responsible for the colony. Food supply was a major problem in the early settlement days, and needed food supplies came mainly from Norfolk Island, which Phillip had occupied in February 1788, an island that later served as a jail for convicts who committed new crimes while serving their sentence in Australia. (In fact, the later Warden of Norfolk Prison, Captain Alexander Maconochie, is legendary for having instituted a then controversial practice of releasing convicts early for good behavior as a means of managing an unruly population of convicts. This innovation resulted in Maconochie being dubbed "the father of parole," and also led to his dismissal as warden.) The New South Wales Corps replaced the Royal Marines in 1792. They were given grants of land and became excellent farmers. Through controlling the price of rum, used as an internal means of exchange, they posed a threat to the governors. When Captain William Bligh (whose crew aboard the Bounty had mutinied in the Pacific) became governor in 1806 and threatened the corps with the loss of their monopoly, they responded with a so-called Rum Rebellion. Bligh was arrested and sent back to London, giving the leaders of the corps a victory. Coincidentally, one of the corps leaders, John Macarthur, found a solution to the colony's lack of valuable exports by interesting British manufacturers in Australian wool. After 1810, the wool of the Australian merino sheep became the basis for a major economic activity. The New South Wales Corps was sent home by the next governor, to be followed by more free settlers claiming farmland on which convicts could serve as laborers. As convicts completed their sentences, they agitated for land and opportunities and were known as emancipists, opposed by the free settlers, who were known as exclusives.
In 1825 the island settlement of Van Diemen's Land (today's Tasmania) became a separate colony, having been established in 1803 as a penal colony because of fear that the French would claim the island. Sheep grazing expansion caused a growth of land claims by squatters and resulted in the colonization of the Port Phillip district that became the colony of Victoria in 1850, with its capital at Melbourne. Another colony to the north, Queensland, was settled by graziers and separated from New South Wales in 1859. Other settlements of European people were subsequently established elsewhere, resulting in the creation of six independent British colonies: New South Wales, Victoria, Queensland, Western Australia, South Australia and Tasmania. In 1850, the sending of convicts to New South Wales was abolished. It was abolished for Van Diemen's Land in 1852. (More than 150,000 had been sent to the two colonies.) Owing to a movement toward free trade, which nullified the need for colonies, from 1842 to 1850 the Australian colonies received constitutions and were given legislative councils (preventing a war of independence which might have unified the Australian colonies). Australia had its own gold rush in the 1850s, which resulted in an influx of Chinese immigrants attracted by gold, a movement that was opposed by the white settlers in their exclusion of all but European settlers. This became known as a "White Australia" policy, a policy that endured up until recently in Australia. Seemingly, this policy also applied to the Aborigines, who, as the frontier pushed inland, were often poisoned, hunted, abused, and exploited by the settlers. After a constitutional convention in Sydney from 1897 to 1898, the six colonies approved federation. The Commonwealth of Australia was subsequently approved by the British Parliament in 1900 and came into existence on January 1, 1901 (although since then, the Northern Territory and the Australian Capital Territory have been granted self-government). The federal constitution combined British and American practices, with a parliamentary government, but with two houses - the popularly elected House of Representatives and a Senate representing the former colonies (which were now states). However, the Balkanization of Australia into separate unrelated states continued until WWI, when the nation unified, sending 330,000 volunteers to fight with the allies. WWII brought a greater alliance with the United States. This alliance has endured until today through Australian participation alongside the Western alliance in the Korean War and in the Vietnam War as an ally of the United States. The White Australia policy was discarded during the 1950s through the 1970s. Under the Colombo Plan, Asians were admitted to Australian universities in the 1950s. In 1967, a national referendum granted citizenship to Aborigines, and in the 1970s, the entry of immigrants began to be based on criteria other than race. Australia remains part of the British Commonwealth, after a national referendum failed to win a majority vote to change Australia's form of government to a republic. The Commonwealth of Australia has nine separate parliaments or legislatures, most of which have lower and upper houses. There are also several hundred local government authorities, known as councils or shires. The national or Commonwealth Government is responsible for defense, foreign affairs, customs, income tax, post and telegraphs.
The State or Territory Governments have primary responsibility for health, education and criminal justice, although the Commonwealth Government is also influential in these areas. There exists a level of tension between the governments at the State or Territory level and the Government of the Commonwealth. This tension is almost exclusively concerned with the issue of the allocation of monies raised from income tax and the appropriate distribution of power. Since the 1970s, there has been a noticeable shift of power toward the Commonwealth Government.
"Australia." Microsoft Encarta Online Encyclopedia 2002, http://encarta.msn.com (23 June, 2002)
Crime is generally defined in Australia as any conduct which is prohibited by law and which may result in punishment. Crimes can be classified as either felony, misdemeanor or minor offenses, but more commonly they are classified as indictable or not indictable offenses. Indictable offenses are those which are heard by the superior courts and may require a jury, whereas non-indictable offenses, which comprise the vast majority of court cases, are heard in magistrates courts, where no juries are employed. While there are some classification differences among the various jurisdictions, in all jurisdictions indictable offenses generally include homicide, robbery, serious sexual and non-sexual assault, fraud, burglary and serious theft. Homicide includes murder, manslaughter (not by driving) and infanticide. Assault is defined as the direct infliction of force, injury or violence upon a person, including attempts or threats. Sexual assault is a physical assault of a sexual nature, directed toward another person where the person does not give consent; or gives consent as a result of intimidation or fraud; or is legally deemed incapable of giving consent because of youth or temporary/ permanent incapacity. Sexual assault includes: rape, sodomy, incest, and other offenses. Rape is defined as unlawful sexual intercourse with another person by force or without the consent of the other person. Robbery is defined as the unlawful removing or taking of property or attempted removal or taking of property without consent by force or threat of force immediately before or after the event. Unlawful entry with intent (UEWI) is defined as the unlawful entry of a structure with the intent to commit an offense. UEWI offenses include burglary, break and enter and some stealing. Motor vehicle theft is the taking of a motor vehicle unlawfully or without permission. "Other theft" or stealing is defined as the taking of another person's property with the intention of permanently depriving the owner of property illegally and without permission, but without force, threat of force, use of coercive measures, deceit or having gained unlawful entry to any structure even if the intent was to commit theft. In some jurisdictions, such as South Australia, there is a group of "minor indictable" offenses which can be heard in the superior or lower courts, according to the wish of the accused. Criminal justice statistics are based on a classification scheme which divides crimes into offenses against the person, property offenses and "other." The minimum age of criminal responsibility and the upper age limit for hearings in juvenile courts varies among Australian States and Territories. The minimum age of criminal responsibility in juvenile courts is 7, while the minimum age to be tried in an adult court is 16. In all jurisdictions, any child above the age of criminal responsibility who is charged with homicide can be tried in an adult court. In some jurisdictions, juveniles may have their offenses tried in adult courts for offenses such as rape and treason. Drug offenses constitute a major focus of all Australian criminal justice systems. The possession, use, sale, distribution, importation, manufacturing or trafficking of a wide range of drugs is illegal in all Australian jurisdictions. Illegal drugs include: marijuana (cannabis), heroin, designer drugs (ice, ecstasy), amphetamines (speed, LSD) and cocaine (including crack). 
While the possession or use of any of these drugs is illegal, in some jurisdictions, notably South Australia and the Australian Capital Territory, marijuana has been partially decriminalized. Its possession or use may result in the imposition of a relatively small fine without the need to appear in court. Tasmania is one of the world's major suppliers of licit opiate products; the government maintains strict controls over the areas of opium poppy cultivation and the output of poppy straw concentrate.
INCIDENCE OF CRIME
The following data have been compiled by the Australian Institute of Criminology from information contained in the annual reports of Australian police forces for the year 2000. In the year 2000, there were 346 homicides reported to the police, for a rate of 2.0 per 100,000 population. The percentage of homicides committed with a firearm was 17%. Attempts are not included. In 2000 there were 141,124 assaults reported by the police, at a rate of 737 per 100,000 population. There were 15,630 victims of sexual assault recorded by the police in Australia in 2000, about 82 victims per 100,000 population. Police recorded 23,314 victims of robbery during 2000, or 122 per 100,000 population. In 2000, there were 436,865 incidents of unlawful entry with intent to commit an offense, a rate of 2,281 victims per 100,000 population. Police recorded 139,094 motor vehicles stolen in 2000, or 726 victims per 100,000 population. A total of 674,813 victims of "other theft" was recorded by the police in 2000, or 3,523 victims per 100,000 population in Australia. A victim survey of households was conducted by the Australian Bureau of Statistics in 1998 for some of these crimes. From this survey it was estimated that 4.3% of households were victimized by assault, 0.4% by sexual assault, 0.5% by robbery, 5.0% by break-in, and 1.7% by motor vehicle theft. If these were converted to rates per 100,000, the rates would be 4,300 for assault, 400 for sexual assault, 500 for robbery, 5,000 for break-in, and 1,700 for motor vehicle theft, in all cases higher than the incidence recorded by police.
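The rate arithmetic behind these figures is straightforward; the short Python sketch below shows it. The population constant and the function names are assumptions introduced only for illustration (roughly Australia's population in 2000, about 19.2 million); the report does not state the exact denominator it used, so the computed rates only approximate the published ones.

POPULATION_2000 = 19_200_000  # assumed population of Australia in 2000, for illustration only

def rate_per_100k(count, population=POPULATION_2000):
    # Convert a raw count of recorded offenses into a rate per 100,000 people.
    return count / population * 100_000

def percent_to_rate_per_100k(percent):
    # Convert a survey percentage (e.g. 4.3% of households) into a rate per 100,000.
    return percent / 100 * 100_000

print(round(rate_per_100k(141_124)))          # assaults: about 735, close to the reported 737
print(round(percent_to_rate_per_100k(4.3)))   # 4.3% of households -> 4,300 per 100,000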
Trend analysis has been done for the years 1973/4 to 1991/2. Trend data using official statistics indicate apparently ever-increasing levels of crime. By contrast, national crime victimization surveys show much more stable trends in crime. Total property crimes reported to police increased from 385,453 to 1,168,423 in 1990/1 (an increase of 203%) before falling in 1991/2 to 1,024,569. Total violent offenses rose from a mere 7,056 to 36,909 in 1991/2, an increase of 423%. Expressed as an annual rate per 100,000 population, property offending went from 2,834.4 crimes reported per 100,000 to 6,563.8 in this period; violence increased from 51.9 to a startling 213.4. Adjusting for population change, then, these increases are 132% and 311% respectively. Of particular concern are trends in reported sexual assaults (rape rates up 426%) and other serious assaults (up 452%). In addition, reported drug offenses were up 612% between 1974/5 and 1991/2. However, the national statistics for homicide remain remarkably steady, within a range between 1.62 per 100,000 and 2.40 per 100,000. By contrast, National Crime Victims Surveys done for the years 1974/5 through 1991/2 show less of a change. These figures suggest increases over these eighteen years of 51% for break, enter and steal, and no increase at all for motor vehicle theft. For robbery, the survey suggests a 33% increase, a decrease of 9% for assault, and no change in incidence for sexual assault. The discrepancy between police and survey data is partly explained by increased reporting, which is itself explained by an increase in the numbers of police. The number of police rose from 178 per 100,000 population in 1973/4 to 244 in 1991/2.
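The percentage increases quoted above follow directly from the raw counts and rates already given in the text; a minimal sketch of that arithmetic, using those same published figures, is shown below.

def percent_increase(old, new):
    # Percentage increase from an earlier value to a later one.
    return (new - old) / old * 100

# Raw counts overstate growth compared with population-adjusted rates.
print(round(percent_increase(385_453, 1_168_423)))   # about 203% (property crime counts, 1973/4 to 1990/1)
print(round(percent_increase(2834.4, 6563.8)))       # about 132% (property crime rate per 100,000)
print(round(percent_increase(51.9, 213.4)))          # about 311% (violent offense rate per 100,000)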
INTERNATIONAL CRIME RATE COMPARISONS
In a comparison of survey victimization from the International Crime Victim Survey, it appears that Australia has a high rate of crime. Car theft is virtually double the average rate for 21 countries, and for most of the other offenses, including burglary and violence, Australian risks are at least fifty percent higher than the average. These offenses include car theft, theft from a car, car damage, burglary, theft of personal property, robbery, sexual assault, and other assault. Only bike theft ranks low in comparison to the average of the other countries.
The Commonwealth of Australia is a federalist government composed of a national government and six State governments. If the Territories are included, there are in effect nine different criminal justice systems in Australia - six state, two territory, and one federal. The eight States and Territories have powers to enact their own criminal law, while the Commonwealth has powers to enact laws of its own. Criminal law is administered principally through the federal, State and Territory police. There is no independent federal corrective service. State or Territory agencies provide corrective services for federal offenders. The government of the Commonwealth is responsible for the enforcement of its own laws. The most frequently prosecuted Commonwealth offenses are those related to the importation of drugs and the violation of social security laws. Offenses against a person or against property occurring in Commonwealth facilities are also regarded as offenses against the Commonwealth. The States are primarily responsible for the development of criminal law. Queensland, Western Australia, and Tasmania are described as "code" States because they have enacted criminal codes which define the limits of the criminal law. The remaining three States, New South Wales, Victoria, and South Australia, are regarded as "common law" States because they have not attempted codification. In practice, however, there is little difference in the elements of the criminal law between the "code" and "common law" States. Local governments can pass legislation, known as bylaws. These generally include social nuisance offenses as well as traffic and parking rules. Local government officials or the State and Territory police generally enforce the local government bylaws. The maximum penalty that can be imposed for conviction of a bylaw offense is a monetary fine. However, non-payment of fines can result in imprisonment. The structure of the Australian legal system is derived from, and still closely follows, that of the United Kingdom. In addition to parliament-made law, there is the "common law" inherited from the English courts, which has since been developed and refined by Australian courts. It should be noted, however, that since 1963 Australian courts have ceased to regard English decisions as superior or even equal in authority to those made by Australian courts. The legal system is adversarial in nature and places a high value on the presumption of innocence. Due to the federalist system of government, there are nine separate legal systems in operation. Although there are some significant differences between these systems, they are essentially similar in structure and operation.
Australia has one police force for each of the six States, the Australian Capital Territory, and the Northern Territory. There is also a Commonwealth agency known as the Australian Federal Police (AFP) which provides police services for the Australian Capital Territory and is also involved in preventing, detecting and investigating crimes committed against the Commonwealth, including drug offenses, money laundering, organized crime, and fraud. The AFP was brought into existence by the Australian Federal Police Act of 1979. However, because of findings of several Royal Commissions in the late 1970s and early 1980s that revealed the extent of organized crime in Australia, the Commonwealth Government in July 1984 established the National Crime Authority (NCA). Legislation was passed in each State, the Northern Territory, and the Australian Capital Territory, to support the work of the NCA in those jurisdictions. The NCA is the only law enforcement agency in Australia not bound by jurisdictional or territorial boundaries. Its single mission is to combat organized criminal activity. Thus, there are now ten separate police forces for the nation, including the NCA and the AFP, police for the two territories, as well as police for the six states (New South Wales, Victoria, Queensland, South Australia, Western Australia, and Tasmania). There are, however, a large number of other agencies which have specific law enforcement functions, including health inspectors, tax officials, and immigration and customs officers. All Australian police forces have a hierarchical organization. In the larger police forces, the chief officer is known as the Commissioner, except in Victoria, where he or she is known as the Chief Commissioner. The larger forces also have one or more Deputy Commissioners and a number of Assistant Commissioners. Below these ranks are Chief Superintendents, Superintendents, Chief Inspectors and Inspectors. Officers achieving the rank of Inspector or above are known as commissioned officers. The remaining ranks consist of Senior Sergeants, Sergeants, Senior Constables and Constables. In the State and Territory police forces, the administration is divided into geographical districts, which are themselves divided into divisions and subdistricts. There is also a movement towards increasing the autonomy of regional police commanders in many Australian police forces. The Commissioner of Police is directly accountable to a Minister, but the Minister is usually not permitted to influence the operation and decisions of police commanders. An Australian Police Ministers Council (APMC) meets at least once a year and is supported by the Commissioners, who meet in this context as the Senior Officers Group (SOG). The APMC and SOG structures have attempted to create a higher level of cooperation and uniformity of police practices throughout Australia. Australian police forces are not closely associated with the military forces. Australian military forces have no responsibility for the maintenance of civil order. However, on very rare occasions the military forces have been required to provide assistance to the police. In the event of a serious natural disaster, such as a flood or bush fire, the military forces are asked to assist the police and other civilian authorities. Australian police recruits are required to have completed their secondary education, although it is not always essential to have been awarded a qualification known as the Higher School Certificate.
A university degree is not generally required of police in Australia except for specialist posts. University training is encouraged for all recruits to the Australian Federal Police and increasingly in other police forces. Recruits must undergo medical and psychological tests and are evaluated on their overall suitability, competence, physical fitness and character. Recruit training is a combination of classroom and field-based experience which takes approximately 18 months to complete. A portion of this training takes place in a police academy and the remainder is conducted on the job. All police officers may use "appropriate" force when encountering violent persons. "Appropriate" is defined by the level of force required to overcome and apprehend the person(s). Police officers may use "lethal" force on a person if they believe their life or the life of another person is in danger. "Lethal" is defined as the level of force that might result in the person's death. All police officers carry handguns and handcuffs. They rarely carry batons; these are usually kept in police cars. In general, a police officer may stop and apprehend any person who appears to be committing, or is about to commit, an offense. The law provides that law enforcement officials may arrest persons without a warrant if there are reasonable grounds to believe a person has committed an offense. The vast majority of arrests are made without a warrant, although there are jurisdictional differences concerning prerequisites to arrest. Law enforcement officials can seek an arrest warrant from a magistrate when a suspect cannot be located or fails to appear. Once individuals are arrested, they must be informed immediately of the grounds of arrest and given a "criminal caution," that is, informed of their rights. Police are generally required to obtain a search warrant from a judge or a magistrate before they enter premises and seize property. However, illegal drugs and weapons can be seized without a warrant. Although the obtaining of confessions from suspected offenders was a controversial subject in the past, the controversy has diminished with the advent of video recording. Virtually all interviews with persons suspected of serious offenses are videotaped. Complaints against the police are investigated by different authorities in different jurisdictions.
Once taken into custody, a detainee must be brought before a magistrate for a bail hearing at the next sitting of the court. Persons charged with criminal offenses generally are released on bail except when charged with an offense carrying a penalty of 12 months imprisonment or more, or when the possibility of violating bail conditions is judged to be high. Attorneys and families are granted prompt access to detainees. Detainees held without bail pending trial generally are segregated from the other elements of the prison population. The law prohibits the mistreatment of persons in custody; however, there were occasional reports that police mistreated suspects. Some indigenous groups charge that police harassment of indigenous people is pervasive and that racial discrimination among police and prison custodians persists. Amnesty International reported several incidents that involved such abuses. State and territorial police forces have internal affairs units that investigate allegations of abuse and report to a civilian ombudsman. The federal Government oversees six immigration detention facilities located in the country and several offshore facilities in the Australian territory of Christmas Island and in the countries of Nauru and Papua New Guinea. These facilities were established to detain individuals who attempt to enter the country unlawfully, pending determination of their applications for refugee status. Hunger strikes and protests have occurred at immigration detention facilities over allegedly poor sanitary conditions, inadequate access to telephones, and limited recreational opportunities.
All accused persons have the right to defend themselves in court but in serious cases most prefer to be represented by a legal practitioner. A recent decision by the High Court of Australia held that in all serious matters, if the accused does not have access to legal advice, the case must be adjourned. In any trial, both the prosecution and the defense have the right to question and cross-examine witnesses. In New South Wales, the accused person also has the right to make an unsworn statement, thus avoiding being cross-examined by the prosecution. This practice has been abolished in all other Australian jurisdictions. A national system for the provision of free legal aid to accused persons was established in 1993 and subsequently some of the States have established legal service commissions which monitor and oversee the provision of this service. Eligibility to receive legal aid depends on the financial means of the individual and the merit of the case being defended. Legal aid is provided either through the salaried staff of a Legal Aid Commission or by assignment to private legal practitioners. Also, an extensive number of Aboriginal legal services throughout Australia receive separate funding from national or state legal services. Arrested persons are brought to a police station where charges are brought against them. Before being charged, the arrested person is usually searched. The police are empowered to use force if the search is resisted. In all serious cases, arrested persons are photographed and fingerprinted before being charged. If no charges are brought, the accused person is released. In most jurisdictions the police allow arrested persons to make a telephone call to a legal adviser, friend or relative. After the charging procedures are completed, the accused is either released on bail or held in custody. The role of the police in pre-trial decision-making includes performing the necessary investigation and detection work, filing charges and, except for the Australian Capital Territory, prosecuting the case in court. In some cases and in all Federal matters, the Director of Public Prosecutions is involved in determining what charges will be brought. If the Director decides that the case should be heard on indictment (heard in a superior court), a committal or preliminary hearing in a lower court is usually held in order to discover whether there is sufficient evidence to proceed with the trial. If the accused pleads guilty to a charge, the judge or magistrate may immediately impose a sentence without setting the case for trial. Thus, guilty pleas help to speed case flow and reduce case overload in the court system. If the accused pleads not guilty, the evidence of the prosecution and defense is heard in an adversarial manner in court. Cases involving serious charges are heard in a higher court with a 12-member jury. However, in some cases, the accused person has the right to waive a jury trial. Police will often conduct the prosecution for lower court cases, but not for those in the higher courts. In some jurisdictions there are alternatives to formal charging and court appearance procedures. These alternatives involve the use of community justice centers or dispute resolution centers to provide for the resolution of disputes between conflicting individuals. The proceedings in these centers are relatively informal and the hearings are less expensive than court procedures.
In addition, most States have small claims tribunals or courts that allow for minor matters to be settled without involving the police or lawyers. Although plea bargaining is not officially permitted in any jurisdiction, some commentators have suggested there exists a form of charge bargaining, an arrangement by which an individual chooses to plead guilty to one or two particular charges with the understanding that other charges will be dropped. Pre-trial incarceration is usually referred to as "remanded in custody." In all jurisdictions there is a strong presumption in favor of granting bail. Bail can be granted either by police or by the courts. There are three main grounds for the denial of bail and remanding an individual in custody: 1) to prevent the offense from being continued or repeated; 2) to ensure that the offender does not abscond and appears in court as required; and 3) to ensure that the accused person does not interfere with the process of justice (for instance, by contacting jurors or witnesses). Generally, suspects facing very serious charges, such as homicide, are remanded in custody for a substantial period of time while awaiting trial. Approximately 13% of all Australian prisoners are awaiting trial, with the period of stay on remand varying from a few days to more than one year in a small number of cases. Australia has a hierarchical system of courts with the High Court of Australia operating at the top. The High Court of Australia is the final court of appeal for all other courts. It is also the court which has sole responsibility for interpreting the Australian Constitution. Within each State and Territory there is a Supreme Court and, in the larger jurisdictions, an intermediate court below it, known as the District Court, District and Criminal Court, or County Court. There is no intermediate court in Tasmania or in the two territories. Below the intermediate courts there are Magistrates' Courts at which virtually all civil and criminal proceedings commence. Approximately 95% of criminal cases are resolved at the Magistrates' Courts level. Cases passing through the courts generally share the following common elements: lodgment - the initiation of the matter with the court; pre-trial discussion and mediation between parties; trial; and court decision - judgment or verdict followed by sentencing. Cases initiated in Magistrates' Courts account for 98.1% of all lodgments in the criminal courts. The majority of criminal hearings (96%) take place in Magistrates' Courts. The duration between the lodgment of a matter with the court and its finalization is referred to as "timeliness." Generally, lower courts complete a greater proportion of their workload more quickly because the disputes and prosecutions heard are less complex than those in higher courts, and cases are of a routine and minor nature. Committals are the first stage of hearing indictable offenses in the criminal justice system. A magistrate assesses the sufficiency of evidence presented against the defendant and decides whether to commit the matter for trial in a superior court. Defendants are often held in custody pending a committal hearing or trial, if ordered. Defendants' cases are finalized at the higher court level in one of the following two ways: adjudicated - a determination of whether or not the defendant is guilty of the charges, based on the judge's decision; and non-adjudicated - a method of completing a case that effectively makes it inactive.
Overall, 77% of the defendants whose cases are heard by a higher court are found guilty of an offense. Parallel to the Supreme Courts in the States and Territories is a Federal Court that is primarily concerned with the enforcement of Commonwealth law, such as that related to trade practices, but that also hears appeals from the Supreme Courts of the Territories. Each State and Territory has a children's or juvenile court. Children's courts are invariably closed to the public and the press in order to protect the anonymity of the accused. The High Court of Australia has seven judges. Since its creation in 1901 there have been 37 appointments to the High Court. All but one of these appointees have been male. A Chief Justice heads the Supreme Court in each State and Territory. The actual number of judges varies according to the size of the state. In some jurisdictions, lay persons are appointed as Justices of the Peace. Although, in the past, these lay persons were able to convene courts and sentence offenders, this power has largely been removed in recent years. All of the persons appointed to the High Court of Australia have been distinguished members of the legal profession, but a significant minority of them have also had political experience or have been judges in a Supreme or Federal Court. The appointment of judges at each government level is the responsibility of the relevant government. In the case of the High Court and the Federal Court, formal judicial appointments are made by the Governor General. The Governor of each State formally appoints judges to its Supreme Court. The identification and recommendation of persons to be appointed as judges in each jurisdiction is primarily the responsibility of the corresponding Attorney-General. In cases where a person either pleads guilty or is found guilty, the judge or magistrate responsible for the case determines the sentence. In complex or serious cases there is frequently an adjournment to allow the judicial officer to consider the appropriate sentence and to hear argument from the prosecution and defense in relation to sentence. Victim impact statements may be submitted in South Australia. In other jurisdictions pre-sentence reports are prepared to assist the judicial officer, usually by probation officers. Pre-sentence reports may also include a psychiatric opinion. There are a variety of sentencing options available at each court level: fine, good behavior bond, probation order, suspended sentence, community supervision, community custody, home detention, periodic detention, and imprisonment. All jurisdictions permit the following penalties to be imposed: fines, probation orders (supervision or recognizance orders), community service orders or imprisonment. Some jurisdictions provide for the imposition of home detention. Home detention is usually employed as a post-prison order rather than as an order imposed directly by the sentencing court. Capital punishment and corporal punishment have been abolished in all Australian jurisdictions. The last execution took place in 1967.
Prisons are the responsibility of the states and territories. There are no federal penitentiaries or local jails. There are approximately 80 prisons throughout Australia. This number is an approximation because several large institutions are subdivided into administratively independent units. Although most prisons are designated as high, medium, or low security facilities, most actually hold prisoners at varying levels of security classification. In June 2000, the total number of prisoners in Australia was 21,714, 94% of whom were male. The imprisonment rate in Australia was 148 per 100,000 population. According to a report by the Australian Bureau of Statistics, as of June 30, 2000, Aboriginal adults represented 1.6 percent of the adult population but constituted approximately 19 percent of the total prison population, equivalent to approximately 14 times the nonindigenous rate of incarceration. The main offenses for which male offenders were sentenced included break and enter, robbery, and sex offenses. For female offenders, the main offenses included drug offenses, fraud, and robbery. Male prisoners sentenced for the violent offenses of homicide, assault, sex offenses, and robbery accounted for almost half of all sentenced male prisoners in 2000, while for females only one-third of sentenced prisoners were incarcerated for violent offenses. Generally, the training period for prison officers varies from 3 to 12 months and always involves a combination of classroom study and on-the-job training. Prison officers are required to undertake further study and pass examinations in order to be considered for promotion in the prison system. In Western Australia, persons who are appointed as superintendents or officers in charge of institutions must obtain some form of tertiary qualification. Until recently all convicted Australian prisoners were entitled to earn remissions or time off for good behavior. This approach has since been changed in New South Wales and Victoria as a result of support for an approach known as "truth in sentencing." This change is said to have resulted in a significant increase in the number of inmates in prisons, particularly in New South Wales. All States and Territories in Australia have provisions for parole and virtually all persons serving sentences of one year or more are released under a parole system. Most of the time, the number of persons serving parole is approximately two thirds of the total number of persons in prison. In addition, for every person in prison, there are approximately four persons serving other forms of non-custodial sentences such as probation or community service. All prisons have provisions for work, education and training, recreation and support. Inmates classified as requiring low security are able to obtain weekend leave. Other privileges are also available.
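The "approximately 14 times" figure can be reproduced from the two shares quoted above. The short Python sketch below is illustrative only: it uses the 1.6 percent and 19 percent values from this paragraph as its sole inputs and assumes the remainder of prisoners and of the adult population are nonindigenous.

# Shares quoted above (June 30, 2000 figures).
indigenous_population_share = 0.016   # share of the adult population
indigenous_prison_share = 0.19        # share of the total prison population

# Each group's incarceration rate is proportional to (prison share / population share);
# the ratio of the two proportions gives the relative rate of incarceration.
indigenous_relative = indigenous_prison_share / indigenous_population_share                 # about 11.9
nonindigenous_relative = (1 - indigenous_prison_share) / (1 - indigenous_population_share)  # about 0.82

print(round(indigenous_relative / nonindigenous_relative, 1))   # about 14.4, i.e. roughly 14 times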
A number of large victim surveys conducted in Australia have consistently shown that most victims do not report crimes to the police. The main reasons that victims have cited for not reporting are that they consider the offense to be trivial or they believe the police either could not or would not do anything about the crime report. Such surveys have also found that victims are more likely than the Australian norm to be male, young, unemployed, and less well educated. The most recent crime survey data for Australia come from the International Crime Victims Survey (ICVS), which was conducted in March 2000. The most commonly mentioned personal crimes for Australia were consumer fraud (9%), assault (7%) and theft from the person (7%). About one in five persons reported being a victim of personal crime in 1999. The most common household crimes were motor vehicle damage (9%) and theft from a motor vehicle (6%). Just over 4% of households reported being a victim of a completed burglary (break-in). About 10% of households own a firearm in Australia (compared to 33% in the United States). About 66% of murders and 41% of robberies occurring in the United States in 2000 involved the use of a firearm, compared to 20% and 6% of murders and robberies, respectively, in Australia. There are a number of agencies that provide crime victim assistance in all Australian jurisdictions. These agencies include rape crisis centers, women's shelters, safe houses and voluntary organizations such as the Victims of Crime Assistance League (VOCAL) and Victims of Crime Services (VOCS). Crime victims do not play an active role in the prosecution or sentencing of an offender in any Australian jurisdiction. South Australia has enacted a Victims of Crime Charter, based on the United Nations Charter. This charter provides for victim impact statements to be prepared and used in certain cases and for victims to be consulted at the various stages in the criminal justice process.
Violence against women is a problem. Social analysts and commentators estimate that domestic violence may affect as many as one family in three or four, but there is no consensus on the extent of the problem. While it is understood that domestic violence is particularly prevalent in certain Aboriginal communities, only the states of Western Australia and Queensland have undertaken comprehensive studies into domestic violence in the Aboriginal community. It is widely agreed that responses to the problem have been ineffectual. The Government recognizes that domestic violence and economic discrimination are serious problems, and the statutorily independent Sex Discrimination Commissioner actively addresses these and other areas of discrimination. A 1996 Australian Bureau of Statistics (ABS) study (the latest for which statistics are available) found that 2.6 percent of 6,333 women surveyed who were married or in a common-law relationship had experienced an incident of violence by their partner in the previous 12-month period. Almost one in four women who have been married or in a common-law relationship have experienced violence by a partner at some time during the relationship, according to the ABS study. Prostitution is legal or decriminalized in many areas of the states and territories. In some locations, state and local governments inspect brothels to prevent mistreatment of the workers and to assure compliance with health regulations. There were 14,074 victims of sexual assault recorded by the police in 1999 (the latest figures publicly available; they do not distinguish by gender), a decrease of 1.8 percent from 1998. This amounts to approximately 74 victims of sexual assault per 100,000 persons. Spousal rape is illegal under the state criminal codes. Though prostitution is legal or decriminalized and occurs throughout the country, child sex tourism is prohibited within the country and overseas. In the past, the occurrence of female genital mutilation (FGM), which is widely criticized by international health experts as damaging to both physical and psychological health, was insignificant. However, in the last few years, small numbers of girls from immigrant communities in which FGM is practiced have been mutilated. The Government has implemented a national educational program on FGM, which is intended to combat the practice in a community health context. Trafficking in women from Asia and the former Soviet Union for the sex trade is a limited problem that the Government is taking steps to address. Sexual harassment is prohibited by the Sex Discrimination Act.
According to an Australian Institute of Criminology (AIC) report released in March, indigenous people were imprisoned nationally at 14 times the rate of nonindigenous people in 1999. The indigenous incarceration rate was 295 per 10,000 persons, while the nonindigenous incarceration rate was 18 per 10,000 persons. The AIC reports that the incarceration rate among indigenous youth was 18.5 times that of the nonindigenous youth population in 1999. Over 45 percent of Aboriginal men between the ages of 20 and 30 years have been arrested at some time in their lives. Aboriginal juveniles accounted for 42 percent of those between the ages of 10 and 17 in juvenile corrective institutions during 2000, according to the AIC. Human rights observers claim that socioeconomic conditions give rise to the common precursors of indigenous crime, for example, unemployment, homelessness, and boredom. Controversy over state mandatory sentencing laws continued throughout the year. These laws set automatic prison terms for multiple convictions of certain crimes. Human rights groups have criticized mandatory sentencing laws, which they allege have resulted in prison terms for relatively minor crimes and indirectly target Aboriginals. In July 2000, the U.N. Human Rights Commission issued an assessment of the country's human rights record that was highly critical of mandatory sentencing. The federal Government decided not to interfere in what it considered to be the states' prerogative, arguing that the laws were passed by democratically elected governments after full political debate, making it inappropriate for the federal government to intervene. The newly-elected government of the Northern Territory repealed the territory's mandatory sentencing laws in October. Australia's Aboriginal and Torres Strait Islander Commission (ATSIC) welcomed this repeal and called upon Western Australia to follow suit. Western Australia continued to retain its mandatory sentencing laws, which provide that a person (adult or juvenile) who commits the crime of home burglary three or more times is subject to a mandatory minimum prison sentence. Indigenous groups charge that police harassment of indigenous people, including juveniles, is pervasive and that racial discrimination among police and prison custodians persists. Human rights groups have alleged a pattern of mistreatment and arbitrary arrests occurring against a backdrop of systematic discrimination.
Although Asians make up less than 5 percent of the population, they account for 40 percent of new immigrants. Public opinion surveys have indicated concern with the numbers of immigrants arriving in the country. Upon coming to power in 1996, the Government reduced annual migrant (nonrefugee) immigration by 10 percent to 74,000; subsequently, it has increased to approximately 80,000. Humanitarian immigration figures remained steady at approximately 12,000 per year from 1996 through this year. The significant increase in unauthorized boat arrivals from the Middle East during the past 3 years has heightened citizens' concern that "queue jumpers" and alien smugglers are abusing the country's refugee program. Leaders in the ethnic and immigrant communities expressed concern that increased numbers of illegal arrivals, as well as violence at migrant detention centers, contributed to a few incidents of vilification of immigrants and minorities. Following the September 11 terrorist attacks on the United States, a mosque in Brisbane was subjected to an arson attack, and cases of vilification against Muslims rose.
TRAFFICKING IN PERSONS
Legislation enacted in late 1999 targets criminal practices associated with trafficking, and other laws address smuggling of migrants. Trafficking in persons from Asia, particularly women (but also children), is a limited problem that the Government is taking steps to address. The Government's response to trafficking in persons is part of a broader effort against "people smuggling," defined as "illegally bringing non-citizens into the country." Smuggling of persons--in all its forms--is prohibited by the Migration Act, which calls for penalties of up to 20 years imprisonment. In September Parliament also enacted the Border Protection Act, which authorizes the boarding and searching of vessels in international waters if they are suspected of smuggling persons. The country is a destination for trafficked women and children. In June the Australian Institute of Criminology (AIC) issued a report entitled Organized Crime in People Smuggling and Trafficking to Australia, which observed that the incidence of trafficking appears to be low. The Department of Immigration and Multicultural and Indigenous Affairs (DIMIA) and the Australian Federal Police (AFP) have determined that women and children from Thailand, the Philippines, Malaysia, China, Indonesia, South Korea, Vietnam, and parts of the former Soviet Union have been trafficked into the country. They are believed to be entering primarily via air with fraudulently obtained tourist or student visas, for purposes of prostitution. There also have been reports of women trafficked into the country from Afghanistan and Iraq. The high profit potential, combined with factors such as the difficulty of detection, the unwillingness (or inability) of witnesses to testify in investigations, apparently short stays in the country by workers in the sex trade, and previously low penalties when prosecuted, has contributed to the spread of groups engaged in these activities. There have been some instances of women being forced to work as sex workers in the country by organized crime groups. There are some reports of women working in the sex industry becoming mired in debt or being physically forced to keep working, and some of these women are under pressure to accept hazardous working conditions, especially if their immigration status is irregular. Some women have been subjected to what is essentially indentured sexual servitude in order to pay off a "contract debt" to their traffickers in exchange for visas, plane tickets, food, and shelter. However, the available evidence suggests that these cases are not widespread. Some women working in the sex industry were not aware prior to entering the country that this was the kind of work they would be doing. Investigations in past years by DIMIA have found women locked in safe houses with barred windows, or under 24-hour escort, with limited access to medical care or the outside world. These women have been lured either by the idea that they would be waitresses, maids, or dancers or, in some cases, coerced to come by criminal elements operating in their home countries. There are also reports of young women and children, primarily from Asia, being sold into the sex industry by impoverished families. Prostitution is legal or decriminalized in many areas of the states and territories, but health and safety standards are not well enforced and vary widely. In September 1999, the Criminal Code Amendment (Slavery and Sexual Servitude) Act came into force.
The act modernizes the country's slavery laws, contains new offenses directed at slavery, sexual servitude, and deceptive recruiting, and addresses the growing and lucrative trade in persons for the purposes of sexual exploitation. The act provides for penalties of up to 25 years' imprisonment and is part of a federal, state, and territory package of legislation. No prosecutions have been brought under this federal law. Another government initiative was the 1994 Child Sex Tourism Act, which provides for the investigation and prosecution of citizens who travel overseas and engage in illegal sexual conduct with children. Under the act, there have been 11 prosecutions, resulting in 7 convictions. Another case was pending at year's end. During the year, the Customs Service increased monitoring of all travelers (men, women, and children) entering the country who it suspected were involved in the sex trade, either as employees or employers. | http://www-rohan.sdsu.edu/faculty/rwinslow/asia_pacific/australia.html | 13 |
17 | Feathered dinosaurs is a term used to describe dinosaurs, particularly maniraptoran dromaeosaurs, that were covered in plumage, ranging from filament-like integumentary structures with few branches to fully developed pennaceous feathers complete with shafts and vanes. The existence of feathered dinosaurs came to be recognized after it was discovered that dinosaurs are closely related to birds. Since then, the term "feathered dinosaurs" has widened to encompass the entire concept of the dinosaur–bird relationship, including the various avian characteristics some dinosaurs possess, such as a pygostyle, a posteriorly oriented pelvis, elongated forelimbs with clawed hands, and clavicles fused to form a furcula. A substantial amount of evidence demonstrates that birds are the descendants of theropod dinosaurs, and that birds evolved during the Jurassic from small, feathered maniraptoran theropods closely related to dromaeosaurids and troodontids (known collectively as deinonychosaurs). Fewer than two dozen species of dinosaurs have been discovered with direct fossil evidence of plumage since the 1990s, with most coming from Cretaceous deposits in China, most notably Liaoning Province. Together, these fossils represent an important transition between dinosaurs and birds, which allows paleontologists to piece together the origin and evolution of birds.
Although direct fossil evidence of integumentary structures is limited to a relatively small number of non-avian dinosaurs, and is particularly well documented in maniraptoriforms, fossils do suggest that a large number of theropods were feathered; it has even been suggested, based on phylogenetic analyses, that Tyrannosaurus at one stage of its life may have been covered in down-like feathers, although there is no direct fossil evidence of this. Based on what is known of the dinosaur fossil record, paleontologists generally think that most of dinosaur evolution happened at relatively large body size (a mass greater than a few kilograms), and in animals that were entirely terrestrial. Small size (<1 kg) and arboreal habits seem to have arisen fairly late during dinosaurian evolution, and only within Maniraptora.
Birds were originally linked with other dinosaurs back in the late 1800s, most famously by Thomas Huxley. This view remained fairly popular until the 1920s when Gerhard Heilmann's book The Origin of Birds was published in English. Heilmann argued that birds could not have descended from dinosaurs (predominantly because dinosaurs lacked clavicles, or so he thought), and he therefore favored the idea that birds originated from the so-called 'pseudosuchians': primitive archosaurs that were also thought ancestral to dinosaurs and crocodilians. This became the mainstream view until the 1970s, when a new look at the anatomical evidence (combined with new data from maniraptoran theropods) led John Ostrom to successfully resurrect the dinosaur hypothesis. Fossils of Archaeopteryx include well-preserved feathers, but it was not until the early 1990s that clearly nonavian dinosaur fossils were discovered with preserved feathers. Today there are more than twenty genera of dinosaurs with fossil feathers, nearly all of which are theropods. Most are from the Yixian Formation in China. The fossil feathers of one specimen, Shuvuuia deserti, have even tested positive for beta-keratin, the main protein in bird feathers, in immunological tests.
Shortly after the 1859 publication of Charles Darwin's The Origin of Species, the ground-breaking book which described his theory of evolution by natural selection, British biologist and defender of evolution Thomas Henry Huxley proposed that birds were descendants of dinosaurs. He compared the skeletal structure of Compsognathus, a small theropod dinosaur, and the 'first bird' Archaeopteryx lithographica (both of which were found in the Upper Jurassic Bavarian limestone of Solnhofen). He showed that, apart from its hands and feathers, Archaeopteryx was quite similar to Compsognathus. In 1868 he published On the Animals which are most nearly intermediate between Birds and Reptiles, making the case. The leading dinosaur expert of the time, Richard Owen, disagreed, claiming that Archaeopteryx was the first bird and lay outside the dinosaur lineage. For the next century, claims that birds were dinosaur descendants faded, while more popular bird-ancestry hypotheses, including those of a possible 'crocodylomorph' or 'thecodont' ancestor, gained ground.
Since the discovery of such theropods as Microraptor and Epidendrosaurus, paleontologists and scientists in general now have small forms exhibiting some features suggestive of a tree-climbing (or scansorial) way of life. However, the idea that dinosaurs might have climbed trees goes back a long way, and well pre-dates the dinosaur renaissance of the 1960s and 70s.
The idea of scansoriality in non-avian dinosaurs has been considered a 'fringe' idea, and it's partly for this reason that, prior to 2000, nobody had attempted any sort of review on the thoughts that had been published about the subject. The oldest reference to scansoriality in a dinosaur comes from William Fox, the Isle of Wight curator and amateur fossil collector, who in 1866 proposed that Calamospondylus oweni from the Lower Cretaceous Wessex Formation of the Isle of Wight might have been in the habit of 'leaping from tree to tree'. The Calamospondylus oweni specimen that Fox referred to was lost, and the actual nature of the fossil remains speculative, but there are various reasons for thinking that it was a theropod. However, it's not entirely accurate to regard Fox's ideas about Calamospondylus as directly relevant to modern speculations about tree-climbing dinosaurs given that, if Fox imagined Calamospondylus oweni as resembling anything familiar, it was probably as a lizard-like reptile, and not as a dinosaur as they are currently understood.
During the early decades of the 20th century the idea of tree-climbing dinosaurs became reasonably popular as Othenio Abel, Gerhard Heilmann and others used comparisons with birds, tree kangaroos and monkeys to argue that the small ornithopod Hypsilophodon (also from the Wessex Formation of the Isle of Wight) was scansorial. Heilmann, however, later came to disagree with this idea and regarded Hypsilophodon as terrestrial. William Swinton favored the idea of a scansorial Hypsilophodon, concluding that 'it would be able to run up the stouter branches and with hands and tail keep itself balanced until the need for arboreal excursions had passed', and in a 1936 review of Isle of Wight dinosaurs mentioned the idea that small theropods might also have used their clawed hands to hold branches when climbing.
During the 1970s, Peter Galton was able to show that all of the claims made about the forelimb and hindlimb anatomy of Hypsilophodon supposedly favoring a scansorial lifestyle were erroneous, and that this animal was in fact well suited for an entirely terrestrial, cursorial lifestyle. Nevertheless, for several decades Hypsilophodon was consistently depicted as a tree-climber.
In recent decades, Gregory Paul has been influential in arguing that small theropods were capable climbers, and he not only argued for and illustrated scansorial abilities in coelurosaurs, he also proposed that as-yet-undiscovered maniraptorans were highly proficient climbers and included the ancestors of birds. The hypothesized existence of small arboreal theropods that are as yet unknown from the fossil record later proved integral to George Olshevsky's 'Birds Came First' (BCF) hypothesis. Olshevsky argued that all dinosaurs, and in fact all archosaurs, descend from small, scansorial ancestors, and that it is these little climbing reptiles which are the direct ancestors of birds.
Ostrom, Deinonychus and the Dinosaur Renaissance
In 1964, the first specimen of Deinonychus antirrhopus was discovered in Montana, and in 1969, John Ostrom of Yale University described Deinonychus as a theropod whose skeletal resemblance to birds seemed unmistakable. From that time, Ostrom became a leading proponent of the theory that birds are direct descendants of dinosaurs. During the late 1960s, Ostrom and others demonstrated that maniraptoran dinosaurs could fold their arms in a manner similar to that of birds. Further comparisons of bird and dinosaur skeletons, as well as cladistic analysis, strengthened the case for the link, particularly for a branch of theropods called maniraptors. Skeletal similarities include the neck, the pubis, the wrists (semi-lunate carpal), the 'arms' and pectoral girdle, the shoulder blade, the clavicle and the breast bone. In all, over a hundred distinct anatomical features are shared by birds and theropod dinosaurs.
Other researchers drew on these shared features and other aspects of dinosaur biology and began to suggest that at least some theropod dinosaurs were feathered. The first restoration of a feathered dinosaur was Sarah Landry's depiction of a feathered "Syntarsus" (now renamed Megapnosaurus or considered a synonym of Coelophysis), in Robert T. Bakker's 1975 publication Dinosaur Renaissance. Gregory S. Paul was probably the first paleoartist to depict maniraptoran dinosaurs with feathers and protofeathers, starting in the late 1980s.
By the 1990s, most paleontologists considered birds to be surviving dinosaurs and referred to 'non-avian dinosaurs' (all extinct) to distinguish them from birds (Aves). Before the discovery of feathered dinosaurs, the evidence was limited to Huxley and Ostrom's comparative anatomy. Some mainstream ornithologists, including Smithsonian Institution curator Storrs L. Olson, disputed the links, specifically citing the lack of fossil evidence for feathered dinosaurs.
Modern research and feathered dinosaurs in China
The early 1990s saw the discovery of spectacularly preserved bird fossils in several Early Cretaceous geological formations in the northeastern Chinese province of Liaoning. South American paleontologists, including Fernando Novas, discovered evidence showing that maniraptorans could move their arms in a bird-like manner. Gatesy and others suggested that anatomical changes to the vertebral column and hindlimbs occurred before birds first evolved, and Xu Xing and colleagues showed that true functional wings and flight feathers evolved in some maniraptorans, all strongly suggesting that these anatomical features were already well developed before the first birds evolved.
In 1996, Chinese paleontologists described Sinosauropteryx as a new genus of bird from the Yixian Formation, but this animal was quickly recognized as a theropod dinosaur closely related to Compsognathus. Surprisingly, its body was covered by long filamentous structures. These were dubbed 'protofeathers' and considered to be homologous with the more advanced feathers of birds, although some scientists disagree with this assessment. Chinese and North American scientists described Caudipteryx and Protarchaeopteryx soon after. Based on skeletal features, these animals were non-avian dinosaurs, but their remains bore fully-formed feathers closely resembling those of birds. "Archaeoraptor," described without peer review in a 1999 issue of National Geographic, turned out to be a smuggled forgery, but legitimate remains continue to pour out of the Yixian, both legally and illegally. Many newly described feathered dinosaurs preserve horny claw sheaths, integumentary structures (filaments to fully pennaceous feathers), and internal organs. Feathers or "protofeathers" have been found on a wide variety of theropods in the Yixian, and the discoveries of extremely bird-like dinosaurs, as well as dinosaur-like primitive birds, have almost entirely closed the morphological gap between theropods and birds.
Archaeopteryx, the first good example of a "feathered dinosaur", was discovered in 1861. The initial specimen was found in the Solnhofen limestone in southern Germany, which is a lagerstätte, a rare and remarkable geological formation known for its superbly detailed fossils. Archaeopteryx is a transitional fossil, with features clearly intermediate between those of modern reptiles and birds. Coming just two years after Darwin's seminal Origin of Species, its discovery spurred the nascent debate between proponents of evolutionary biology and creationism. This early bird is so dinosaur-like that, without a clear impression of feathers in the surrounding rock, at least one specimen was mistaken for Compsognathus.
Since the 1990s, a number of additional feathered dinosaurs have been found, providing even stronger evidence of the close relationship between dinosaurs and modern birds. Most of these specimens were unearthed in Liaoning province, northeastern China, which was part of an island continent during the Cretaceous period. Though feathers have been found only in the lagerstätte of the Yixian Formation and a few other places, it is possible that non-avian dinosaurs elsewhere in the world were also feathered. The lack of widespread fossil evidence for feathered non-avian dinosaurs may be due to the fact that delicate features like skin and feathers are not often preserved by fossilization and thus are absent from the fossil record.
A recent development in the debate centers on the discovery of impressions of "protofeathers" surrounding many dinosaur fossils. These protofeathers suggest that the tyrannosauroids may have been feathered. However, others claim that these protofeathers are simply the result of the decomposition of collagenous fibers that underlay the dinosaurs' integument. The Dromaeosauridae family, in particular, seems to have been heavily feathered and at least one dromaeosaurid, Cryptovolans, may have been capable of flight.
Because feathers are often associated with birds, feathered dinosaurs are often touted as the missing link between birds and dinosaurs. However, the multiple skeletal features also shared by the two groups represent the more important link for paleontologists. Furthermore, it is increasingly clear that the relationship between birds and dinosaurs, and the evolution of flight, are more complex topics than previously realized. For example, while it was once believed that birds evolved from dinosaurs in one linear progression, some scientists, most notably Gregory S. Paul, conclude that dinosaurs such as the dromaeosaurs may have evolved from birds, losing the power of flight while keeping their feathers in a manner similar to the modern ostrich and other ratites.
Comparisons of bird and dinosaur skeletons, as well as cladistic analysis, strengthen the case for the link, particularly for a branch of theropods called maniraptors. Skeletal similarities include the neck, pubis, wrist (semi-lunate carpal), arm and pectoral girdle, shoulder blade, clavicle, and breast bone.
At one time, it was believed that dinosaurs lacked furculae, structures long thought to be unique to birds, which are formed by the fusion of the two collarbones (clavicles) into a single V-shaped element that helps brace the skeleton against the stresses incurred while flapping. This apparent absence was treated as an overwhelming argument against the dinosaur ancestry of birds in Danish artist and naturalist Gerhard Heilmann's monumentally influential The Origin of Birds (1926): reptiles ancestral to birds, Heilmann reasoned, should at the very least show well-developed clavicles. In the book, Heilmann reported that no clavicles had been found in any theropod dinosaur, and he therefore suggested that birds evolved from a more generalized archosaurian ancestor, such as the aptly-named Ornithosuchus (literally, “bird-crocodile”), which is now believed to be closer to the crocodile end of the archosaur lineage. At the time, however, Ornithosuchus seemed to be a likely ancestor of more birdlike creatures.
Contrary to what Heilmann believed, paleontologists since the 1980s have accepted that clavicles, and in most cases furculae, are a standard feature not just of theropods but of saurischian dinosaurs. Furculae in dinosaurs are not limited to maniraptorans, as evidenced by an article by Chure & Madsen in which they described a furcula in an allosaurid dinosaur, a non-avian theropod. In 1983, Rinchen Barsbold reported the first dinosaurian furcula from a specimen of the Cretaceous theropod Oviraptor. A furcula-bearing Oviraptor specimen had in fact been known since the 1920s, but because at the time the theropod origin of birds was largely dismissed, it was misidentified for sixty years.
Following this discovery, paleontologists began to find furculae in other theropod dinosaurs. Wishbones are now known from the dromaeosaur Velociraptor, the allosauroid Allosaurus, and the tyrannosaurid Tyrannosaurus rex, to name a few. As of late 2007, ossified furculae (i.e., made of bone rather than cartilage) had been found in nearly all types of theropods except the most basal ones, Eoraptor and Herrerasaurus. The original report of a furcula in the primitive theropod Segisaurus (1936) was confirmed by a re-examination in 2005. Joined, furcula-like clavicles have also been found in Massospondylus, an Early Jurassic sauropodomorph, indicating that the evolution of the furcula was well underway when the earliest dinosaurs were diversifying.
In 2000, Alex Downs reported an isolated furcula found within a block of Coelophysis bauri skeletons from the Late Triassic Rock Point Formation at Ghost Ranch, New Mexico. While it seemed likely that it originally belonged to Coelophysis, the block contained fossils from other Triassic animals as well, and Downs declined to make a positive identification. Currently, a total of five C. bauri furculae have been found in the New Mexico Museum of Natural History (NMMNH) block C-8-82 from the Whitaker Quarry at Ghost Ranch, New Mexico. Three of the furculae are articulated in juvenile skeletons; two of these are missing fragments but are nearly complete, and one is apparently complete. Two years later, Tykoski et al. described several furculae from two species of the coelophysoid genus Syntarsus (now Megapnosaurus), S. rhodesiensis and S. kayentakatae, from the Early Jurassic of Zimbabwe and Arizona, respectively. Syntarsus was long considered to be the genus most closely related to Coelophysis, differing only in a few anatomical details and its slightly younger age, so the identification of furculae in Syntarsus made it very likely that the furcula Downs noted in 2000 came from Coelophysis after all. By 2006, wishbones were definitively known from the Early Jurassic Coelophysis rhodesiensis and Coelophysis kayentakatae, and a single isolated furcula was known that might have come from the Late Triassic type species, Coelophysis bauri.
Avian air sacs
Large meat-eating dinosaurs had a complex system of air sacs similar to those found in modern birds, according to an investigation which was led by Patrick O'Connor of Ohio University. The lungs of theropod dinosaurs (carnivores that walked on two legs and had birdlike feet) likely pumped air into hollow sacs in their skeletons, as is the case in birds. "What was once formally considered unique to birds was present in some form in the ancestors of birds", O'Connor said. In a paper published in the online journal Public Library of Science ONE (September 29, 2008), scientists described Aerosteon riocoloradensis, the skeleton of which supplies the strongest evidence to date of a dinosaur with a bird-like breathing system. CT-scanning revealed the evidence of air sacs within the body cavity of the Aerosteon skeleton.
Heart and sleeping posture
Modern computed tomography (CT) scans of a dinosaur chest cavity conducted in 2000 found the apparent remnants of a complex four-chambered heart, much like those found in today's mammals and birds. The idea is controversial within the scientific community, coming under fire for bad anatomical science or simply wishful thinking. The type fossil of the troodontid Mei is complete and exceptionally well preserved in three-dimensional detail, with the snout nestled beneath one of the forelimbs, similar to the roosting position of modern birds. This suggests that these dinosaurs slept like certain modern birds, with their heads tucked under their arms. This behavior, which may have helped to keep the head warm, is also characteristic of modern birds.
A discovery of features in a Tyrannosaurus rex skeleton recently provided more evidence that dinosaurs and birds evolved from a common ancestor and, for the first time, allowed paleontologists to establish the sex of a dinosaur. When laying eggs, female birds grow a special type of bone in their limbs between the hard outer bone and the marrow. This medullary bone, which is rich in calcium, is used to make eggshells. The presence of endosteally derived bone tissues lining the interior marrow cavities of portions of the Tyrannosaurus rex specimen's hind limb suggested that T. rex used similar reproductive strategies, and revealed the specimen to be female. Further research has found medullary bone in the theropod Allosaurus and ornithopod Tenontosaurus. Because the line of dinosaurs that includes Allosaurus and Tyrannosaurus diverged from the line that led to Tenontosaurus very early in the evolution of dinosaurs, this suggests that dinosaurs in general produced medullary tissue. Medullary bone has been found in specimens of sub-adult size, which suggests that dinosaurs reached sexual maturity rather quickly for such large animals. The micro-structure of eggshells and bones has also been determined to be similar to that of birds.
Brooding and care of young
Several specimens of the Mongolian oviraptorid Citipati were discovered in 1993 in a chicken-like brooding position, resting over the eggs in their nests, which may mean that the animal was covered with an insulating layer of feathers that kept the eggs warm. All of the nesting specimens are situated on top of egg clutches, with their limbs spread symmetrically on each side of the nest, front limbs covering the nest perimeter. This brooding posture is found today only in birds and supports a behavioral link between birds and theropod dinosaurs. The nesting position of Citipati also supports the hypothesis that it and other oviraptorids had feathered forelimbs. With the 'arms' spread along the periphery of the nest, a majority of eggs would not be covered by the animal's body unless an extensive coat of feathers was present.
A dinosaur embryo was found without teeth, which suggests that some parental care was required to feed the young dinosaur; possibly the adult regurgitated food into the young animal's mouth (see altricial). This behavior is seen in numerous bird species; parent birds regurgitate food into the hatchling's mouth.
The loss of teeth and the formation of a beak have been shown to have been favored by selection to suit the newly aerodynamic bodies and flight of early birds. In the Jehol Biota in China, various dinosaur fossils have been discovered with a variety of tooth morphologies that bear on this evolutionary trend. Sinosauropteryx fossils display unserrated premaxillary teeth, while the maxillary teeth are serrated. In the preserved remains of Protarchaeopteryx, four premaxillary teeth are present that are serrated. The diminutive oviraptorosaur Caudipteryx has four hook-like premaxillary teeth, and in Microraptor zhaoianus the posterior teeth had developed a constriction that led to a less compressed tooth crown. These dinosaurs exhibit a heterodont dentition pattern that clearly illustrates a transition from the teeth of maniraptorans to those of early, basal birds.
Molecular evidence and soft tissue
One of the best examples of soft tissue impressions in a fossil dinosaur was discovered in Pietraroja, Italy. The discovery, reported in 1998, described the specimen of a small, very young coelurosaur, Scipionyx samniticus. The fossil includes portions of the intestines, colon, liver, muscles, and windpipe of this immature dinosaur.
In the March 2005 issue of Science, Dr. Mary Higby Schweitzer and her team announced the discovery of flexible material resembling actual soft tissue inside a 68-million-year-old Tyrannosaurus rex leg bone from the Hell Creek Formation in Montana. After recovery, the tissue was rehydrated by the science team. The seven collagen types obtained from the bone fragments, when compared to collagen data from living birds (specifically, a chicken), indicate that theropods and birds are closely related.
When the fossilized bone was treated over several weeks to remove mineral content from the fossilized bone marrow cavity (a process called demineralization), Schweitzer found evidence of intact structures such as blood vessels, bone matrix, and connective tissue (bone fibers). Scrutiny under the microscope further revealed that the putative dinosaur soft tissue had retained fine structures (microstructures) even at the cellular level. The exact nature and composition of this material, and the implications of Dr. Schweitzer's discovery, are not yet clear; study and interpretation of the specimens is ongoing.
The successful extraction of ancient DNA from dinosaur fossils has been reported on two separate occasions, but upon further inspection and peer review, neither of these reports could be confirmed. However, a functional visual peptide of a theoretical dinosaur has been inferred using analytical phylogenetic reconstruction methods on gene sequences of related modern species such as reptiles and birds. In addition, several proteins have putatively been detected in dinosaur fossils, including hemoglobin.
Feathers are extremely complex integumentary structures that characterize a handful of vertebrate animals. Although it is generally acknowledged that feathers are derived and evolved from simpler integumentary structures, the origin and early diversification of feathers were poorly understood until recently, and research into the question is ongoing. Since the theropod ancestry of birds is widely supported by osteological and other physical lines of evidence, precursors of feathers are indeed present in dinosaurs, as predicted by those who originally proposed a theropod origin for birds. In 2006, Chinese paleontologist Xu Xing stated in a paper that since many members of Coelurosauria exhibit miniaturization, primitive integumentary structures (and later on feathers) evolved in order to insulate their small bodies.
The functional view on the evolution of feathers has traditionally focused on insulation, flight and display. Discoveries of non-flying Late Cretaceous feathered dinosaurs in China, however, suggest that flight could not have been the original primary function; the feathers of these dinosaurs must initially have served some other purpose. Proposed original functions include insulation, acquired after these animals had metabolically diverged from their cold-blooded reptilian ancestors, and increased running speed: it has been suggested that vaned feathers evolved in the context of thrust, with running, non-avian theropods flapping their arms to increase their running speed.
The following is the generally acknowledged version of the origin and early evolution of feathers:
- The first feathers evolved; they are single filaments.
- Branching structures developed.
- The rachis evolved.
- Pennaceous feathers evolved.
- Aerodynamic morphologies (curved shaft and asymmetrical vanes) appeared.
This scenario appears to indicate that downy, contour, and flight feathers, are more derived forms of the first "feather". However, it is also possible that protofeathers and basal feathers disappeared early on in the evolution of feathers and that more primitive feathers in modern birds are secondary. This would imply that the feathers in modern birds have nothing to do with protofeathers.
A recent study performed by Prum and Brush (2002) suggested that the feathers of birds are not homologous with the scales of reptiles. A new model of feather evolution posits that feathers first arose as a feather follicle emerging from the skin's surface, with no relation to reptilian scales. After this initial event, additions and new morphological characteristics were added to the feather design and more complex feathers evolved. This model of feather evolution, while agreeing with the distribution of various feather morphologies in coelurosaurs, is also at odds with other evidence. The feather bristles of modern-day turkeys resemble the hair-like integumentary structures found in some maniraptorans, pterosaurs (see Pterosauria#Pycnofibers), and ornithischians, which are widely regarded to be homologous to modern feathers, yet also show distinct, feather-like characteristics. This has led some paleontologists, such as Xu Xing, to theorize that feathers share homology with lizard scales after all.
- Stage I: Tubular filaments and feather-type beta keratin evolved.[Note 3]
- Stage II: The filamentous structure evolved distal branches.[Note 4]
- Stage III: Xu Xing described this stage as being the most important stage. The main part of the modern feather, the feather follicle, appeared along with the rachises and planar forms developed.[Note 5]
- Stage IV: Large, stiff, well-developed pennaceous feathers evolved on the limbs and tails of maniraptoran dinosaurs. Barbules evolved.[Note 6]
- Stage V: Feather tracts (pennaceous feathers that are located on regions other than the limbs and tail) evolved. Specialized pennaceous feathers developed.
Xu Xing himself stated that this new model was similar to the one put forward by Richard Prum, with the exception that Xu's model posits that feathers "feature a combination of transformation and innovation". This view differs from Prum's model in that Prum suggested that feathers were purely an evolutionary novelty. Xu's new model also suggests that the tubular filaments and branches evolved before the appearance of the feather follicle, while acknowledging that the follicle was an important development in feather evolution, again in contrast to Prum's model.
Primitive feather types
The evolution of feather structures is thought to have proceeded from simple hollow filaments through several stages of increasing complexity, ending with the large, deeply rooted feathers with strong shafts (rachises), barbs and barbules that birds display today. The simplest structures were probably most useful as insulation, which implies homeothermy; only the more complex feather structures would be likely candidates for aerodynamic uses.
Models of feather evolution often propose that the earliest prototype feathers were hair-like integumentary filaments similar to the structures of Sinosauropteryx, a compsognathid (Jurassic/Cretaceous, 150-120 Ma), and Dilong, a basal tyrannosauroid from the Early Cretaceous. It is not known with certainty at what point in archosaur phylogeny the earliest simple “protofeathers” arose, or whether they arose once or independently multiple times. Filamentous structures are clearly present in pterosaurs, and long, hollow quills have been reported in a specimen of Psittacosaurus from Liaoning. It is thus possible that the genes for building simple integumentary structures from beta keratin arose before the origin of dinosaurs, possibly in the last common ancestor with pterosaurs – a basal ornithodiran.
In Prum's model of feather evolution, hollow quill-like integumentary structures of this sort were termed Stage 1 feathers. The idea that feathers started out as hollow quills also supports Alan Brush's idea that feathers are evolutionary novelties, and not derived from scales. However, in order to determine the homology of Stage 1 feathers, it is necessary to determine their proteinaceous content: unlike the epidermal appendages of all other vertebrates, feathers are almost entirely composed of beta-keratins (as opposed to alpha-keratins) and, more specifically, they are formed from a group of beta-keratins called phi-keratins. No studies have yet been performed on the Stage 1 structures of Sinosauropteryx or Dilong in order to test their proteinaceous composition; however, tiny filamentous structures discovered adjacent to the bones of the alvarezsaurid Shuvuuia have been tested and found to be composed of beta-keratin. Alvarezsaurids have been of controversial phylogenetic position, but are generally agreed to be basal members of the Maniraptora clade. Due to this discovery, paleontologists are now convinced that beta-keratin-based protofeathers had evolved at least at the base of this clade.
Vaned, pennaceous feathers
While basal coelurosaurs possessed these apparently hollow quill-like 'Stage 1' filaments, they lacked the more complex structures seen in maniraptorans. Maniraptorans possessed vaned feathers with barbs, barbules and hooklets just like those of modern birds.
The first dinosaur fossils from the Yixian Formation found to have true flight-structured feathers (pennaceous feathers) were Protarchaeopteryx and Caudipteryx (135-121 Ma). Due to the size and proportions of these animals, it is more likely that their feathers were used for display rather than for flight. Subsequent dinosaurs found with pennaceous feathers include Pedopenna and Jinfengopteryx. Several specimens of Microraptor, described by Xu et al. in 2003, show not only pennaceous feathers but also true asymmetrical flight feathers, present on the fore and hind limbs and tail. Asymmetrical feathers are considered important for flight in birds. Before the discovery of Microraptor gui, Archaeopteryx was the most primitive known animal with asymmetrical flight feathers.
However, the bodies of maniraptorans were not covered in vaned feathers as are those of the majority of living birds: instead, it seems that they were at least partly covered in the more simple structures that they had inherited from basal coelurosaurs like Sinosauropteryx. This condition may have been retained all the way up into basal birds: despite all those life restorations clothing archaeopterygids in vaned breast, belly, throat and neck feathers, it seems that their bodies also were at least partly covered in the more simple filamentous structures. The Berlin Archaeopteryx specimen appears to preserve such structures on the back of the neck though pennaceous vaned feathers were present on its back, at least.
Though it has been suggested at times that vaned feathers simply must have evolved for flight, the phylogenetic distribution of these structures currently indicates that they first evolved in flightless maniraptorans and were only later exapted by long-armed maniraptorans for use in locomotion. Of course a well-known minority opinion, best known from the writings of Gregory Paul, is that feathered maniraptorans are secondarily flightless and descend from volant bird-like ancestors. While this hypothesis remains possible, it lacks support from the fossil record, though that may or may not mean much, as the fossil record is incomplete and prone to selection bias.
The discovery of Epidexipteryx represented the earliest known examples of ornamental feathers in the fossil record. Epidexipteryx is known from a well preserved partial skeleton that includes four long feathers on the tail, composed of a central rachis and vanes. However, unlike in modern-style rectrices (tail feathers), the vanes were not branched into individual filaments but made up of a single ribbon-like sheet. Epidexipteryx also preserved a covering of simpler body feathers, composed of parallel barbs as in more primitive feathered dinosaurs. However, the body feathers of Epidexipteryx are unique in that some appear to arise from a "membranous structure." The skull of Epidexipteryx is also unique in a number of features, and bears an overall similarity to the skull of Sapeornis, oviraptorosaurs and, to a lesser extent, therizinosauroids. The tail of Epidexipteryx bore unusual vertebrae towards the tip which resembled the feather-anchoring pygostyle of modern birds and some oviraptorosaurs. Despite its close relationship to avialan birds, Epidexipteryx appears to have lacked remiges (wing feathers), and it likely could not fly. Zhang et al. suggest that unless Epidexipteryx evolved from flying ancestors and subsequently lost its wings, this may indicate that advanced display feathers on the tail may have predated flying or gliding flight.
According to the model of feather evolution developed by Prum & Brush, feathers started out ('stage 1') as hollow cylinders, then ('stage 2') became unbranched barbs attached to a calamus. By stage 3, feathers were planar structures with the barbs diverging from a central rachis, and from there pennaceous feathers evolved. The feathers of Epidexipteryx may represent stage 2 structures, but they also suggest that a more complicated sequence of steps took place in the evolution of feathers.
Use in predation
Several maniraptoran lineages were clearly predatory and, given the morphology of their manual claws, fingers and wrists, presumably in the habit of grabbing at prey with their hands. Contrary to popular belief, feathers on the hands would not have greatly impeded the use of the hands in predation. Because the feathers are attached at an angle roughly perpendicular to the claws, they are oriented tangentially to the prey's body, regardless of prey size.:315 It is important to note here that theropod hands appear to have been oriented such that the palms faced medially (facing inwards), and were not parallel to the ground as used to be imagined.
However, feathering would have interfered with the ability of the hands to bring a grasped object up toward the mouth given that extension of the maniraptoran wrist would have caused the hand to rotate slightly upwards on its palmar side. If both feathered hands are rotated upwards and inwards at the same time, the remiges from one hand would collide with those of the other. For this reason, maniraptorans with feathered hands could grasp objects, but would probably not be able to carry them with both hands. However, dromaeosaurids and other maniraptorans may have solved this problem by clutching objects single-handedly to the chest. Feathered hands would also have restricted the ability of the hands to pick objects off of the ground, given that the feathers extend well beyond the ends of the digits. It remains possible that some maniraptorans lacked remiges on their fingers, but the only evidence available indicates the contrary. It has recently been argued that the particularly long second digit of the oviraptorosaur Chirostenotes was used as a probing tool, locating and extracting invertebrates and small mammals and so on from crevices and burrows. It seems highly unlikely that a digit that is regularly thrust into small cavities would have had feathers extending along its length, so either Chirostenotes didn't probe as proposed, or its second finger was unfeathered, unlike that of Caudipteryx and the other feathered maniraptorans. Given the problems that the feathers might have posed for clutching and grabbing prey from the ground, we might also speculate that some of these dinosaurs deliberately removed their own remiges by biting them off. Some modern birds (notably motmots) manipulate their own feathers by biting off some of the barbs, so this is at least conceivable, but no remains in the fossil record have been recovered that support this conclusion.
Some feather morphologies in non-avian theropods are comparable to those on modern birds. Single filament-like structures are not present in modern feathers, although some birds possess highly specialized feathers that are superficially similar in appearance to the protofeathers of non-avian theropods. Tuft-like structures seen in some maniraptorans are similar to the natal down of modern birds. Similarly, structures in the fossil record composed of a series of filaments joined at their bases along a central filament bear an uncanny resemblance to the down feathers of modern birds, with the exception of a lack of barbules. Furthermore, structures recovered from Chinese Cretaceous deposits that consist of a series of filaments joined at their bases at the distal portion of a central filament bear a superficial resemblance to filoplumes. More derived, pennaceous feathers on the tails and limbs of feathered dinosaurs are nearly identical to the remiges and rectrices of modern birds.
Feather structures and anatomy
Feathers vary in length according to their position on the body, with the filaments of the compsognathid Sinosauropteryx being 13 mm and 21 mm long on the neck and shoulders respectively. In contrast, the structures on the skull are about 5 mm long, those on the arm about 2 mm long, and those on the distal part of the tail about 4 mm long. Because the structures tend to be clumped together it is difficult to be sure of an individual filament's morphology. The structures might have been simple and unbranched, but Currie & Chen (2001) thought that the structures on Sinosauropteryx might be branched and rather like the feathers of birds that have short quills but long barbs. The similar structures of Dilong also appear to exhibit a simple branching structure.
Exactly how feathers were arranged on the arms and hands of both basal birds and non-avian maniraptorans has long been unclear, and both non-avian maniraptorans and archaeopterygids have conventionally been depicted as possessing unfeathered fingers. However, the second finger is needed to support the remiges,[Note 7] and therefore must have been feathered. Derek Yalden's 1985 study was important in showing exactly how the remiges would have grown off of the first and second phalanges of the archaeopterygid second finger, and this configuration has been widely recognized.:129-159
However, there has been some minor historical disagreement over exactly how many remiges were present in archaeopterygids (there were most likely 11 primaries and a tiny distal 12th one, and at least 12 secondaries), and also about how the hand claws were arranged. The claws were directed perpendicularly to the palmar surface in life, and rotated anteriorly in most (but not all) specimens during burial.:129-159 It has also been suggested on occasion that the fingers of archaeopterygids and other feathered maniraptorans were united in a single fleshy 'mitten' as they are in modern birds, and hence unable to be employed in grasping. However, given that the interphalangeal finger joints of archaeopterygids appear suited for flexion and extension, and that the third finger apparently remained free and flexible in birds more derived than archaeopterygids, this is unlikely to be correct; it's based on a depression in the sediment that was identified around the bones.
Like those of archaeopterygids and modern birds, the remiges of non-avian theropods would also have been attached to the phalanges of the second manual digit as well as to the metacarpus and ulna, and indeed we can see this in the fossils. It's the case in the sinornithosaur NGMC 91-A and Microraptor. Surprisingly, in Caudipteryx, the remiges are restricted to the hands alone, and don't extend from the arm. They seem to have formed little 'hand flags' that are unlikely to have served any function other than display. Caudipteryx is an oviraptorosaur and possesses a suite of characters unique to this group. It is not a member of Aves, despite the efforts of some workers to make it into one. The hands of Caudipteryx supported symmetrical, pennaceous feathers that had vanes and barbs, and that measured between 15–20 centimeters long (6–8 inches). These primary feathers were arranged in a wing-like fan along the second finger, just like primary feathers of birds and other maniraptorans. No fossil of Caudipteryx preserves any secondary feathers attached to the forearms, as found in dromaeosaurids, Archaeopteryx and modern birds. Either these arm feathers are not preserved, or they were not present on Caudipteryx in life. An additional fan of feathers existed on its short tail. The shortness and symmetry of the feathers, and the shortness of the arms relative to the body size, indicate that Caudipteryx could not fly. The body was also covered in a coat of short, simple, down-like feathers.
A small minority, including ornithologists Alan Feduccia and Larry Martin, continues to assert that birds are instead the descendants of earlier archosaurs, such as Longisquama or Euparkeria. Embryological studies of bird developmental biology have raised questions about digit homology in bird and dinosaur forelimbs.
Opponents also claim that the dinosaur-bird hypothesis is dogma, apparently on the grounds that those who accept it have not accepted the opponents' arguments for rejecting it. However, science does not require unanimity and does not force agreement, nor does science settle issues by vote. It has been over 25 years since John Ostrom first put forth the dinosaur-bird hypothesis in a short article in Nature, and the opponents of this theory have yet to propose an alternative, testable hypothesis. However, due to the cogent evidence provided by comparative anatomy and phylogenetics, as well as the dramatic feathered dinosaur fossils from China, the idea that birds are derived dinosaurs, first championed by Huxley and later by Nopcsa and Ostrom, enjoys near-unanimous support among today's paleontologists.
BADD, BAND, and the Birds Came First hypothesis
- Main article: Birds Came First
The non-standard, non-mainstream Birds Came First (or BCF) hypothesis proposed by George Olshevsky accepts that there is a close relationship between dinosaurs and birds, but argues that, merely given this relationship, it is just as likely that dinosaurs descended from birds as the other way around. The hypothesis does not propose that birds in the proper sense evolved earlier than did other dinosaurs or other archosaurs: rather, it posits that small, bird-like, arboreal archosaurs were the direct ancestors of all the archosaurs that came later on (proper birds included). Olshevsky was aware of this, and apparently considered the rather tongue-in-cheek alternative acronym GOODD, meaning George Olshevsky On Dinosaur Descendants. This was, of course, meant as the opposite of the also tongue-in-cheek BADD (Birds Are Dinosaur Descendants): the term Olshevsky uses for the 'conventional' or 'mainstream' view of avian origins outlined above. 'BADD' is bad, according to BCF, because it imagines that small size, feathers and arboreal habits all evolved very late in archosaur evolution, and exclusively within maniraptoran theropod dinosaurs.
Protoavis is a Late Triassic archosaurian whose fossilized remains were found near Post, Texas. These fossils have been described as a primitive bird which, if the identification is valid, would push back avian origins some 60-75 million years.
Though it existed far earlier than Archaeopteryx, its skeletal structure is allegedly more bird-like. The fossil bones are too badly preserved to allow an estimate of flying ability; although reconstructions usually show feathers, judging from thorough study of the fossil material there is no indication that these were present.
However, this description of Protoavis assumes that Protoavis has been correctly interpreted as a bird. Almost all paleontologists doubt that Protoavis is a bird, or that all remains assigned to it even come from a single species, because of the circumstances of its discovery and unconvincing avian synapomorphies in its fragmentary material. When they were found at a Dockum Formation quarry in the Texas panhandle in 1984, in a sedimentary strata of a Triassic river delta, the fossils were a jumbled cache of disarticulated bones that may reflect an incident of mass mortality following a flash flood.
Scientists such as Alan Feduccia have cited Protoavis in an attempt to refute the hypothesis that birds evolved from dinosaurs. However, even if Protoavis were a bird, the only consequence would be to push the point of divergence further back in time. At the time when such claims were originally made, the affiliation of birds and maniraptoran theropods, which today is well supported and generally accepted by most ornithologists, was much more contentious; most Mesozoic birds have only been discovered since then. Chatterjee himself has since used Protoavis to support a close relationship between dinosaurs and birds.
"As there remains no compelling data to support the avian status of Protoavis or taxonomic validity thereof, it seems mystifying that the matter should be so contentious. The author very much agrees with Chiappe in arguing that at present, Protoavis is irrelevant to the phylogenetic reconstruction of Aves. While further material from the Dockum beds may vindicate this peculiar archosaur, for the time being, the case for Protoavis is non-existent."
Claimed temporal paradox
The temporal paradox, or time problem, is a controversial issue in the evolutionary relationships of feathered dinosaurs and birds. It was originally conceived by paleornithologist Alan Feduccia. The concept is based on the following apparent facts. The consensus view is that birds evolved from dinosaurs, but the most bird-like dinosaurs, and those most closely related to birds (the maniraptorans), are known mostly from the Cretaceous, by which time birds had already evolved and diversified. If bird-like dinosaurs are the ancestors of birds they should be older than birds, but Archaeopteryx is 155 million years old, while the very bird-like Deinonychus is 35 million years younger. This idea is sometimes summarized as "you can't be your own grandmother". On this view, the development of avian characteristics in dinosaurs should have led to the first modern bird appearing about 60 million years ago, yet Archaeopteryx lived 150 million years ago, long before any of these bird-like changes took place in dinosaurs. Each of the feathered dinosaur families developed avian-like features in its own way, so there were several different lines of evolution; Archaeopteryx was merely the result of one such line.
Numerous researchers have discredited the idea of the temporal paradox. Witmer (2002) summarized this critical literature by pointing out that there are at least three lines of evidence that contradict it. First, no one has proposed that maniraptoran dinosaurs of the Cretaceous are the ancestors of birds. They have merely found that dinosaurs like dromaeosaurs, troodontids and oviraptorosaurs are close relatives of birds. The true ancestors are thought to be older than Archaeopteryx, perhaps Early Jurassic or even older. The scarcity of maniraptoran fossils from this time is not surprising, since fossilization is a rare event requiring special circumstances, and fossils may never be found of animals in sediments from ages that they actually inhabited. Second, fragmentary remains of maniraptoran dinosaurs actually have been known from Jurassic deposits in China, North America, and Europe for many years. The femur of a tiny maniraptoran from the Late Jurassic of Colorado was reported by Padian and Jensen in 1989. In a 2009 article in the journal Acta Palaeontologica Polonica, six velociraptorine dromaeosaurid teeth were described from a bone bed in the Langenberg Quarry of Oker (Goslar, Germany). These teeth are notable in that they date back to the Kimmeridgian stage of the Late Jurassic, roughly 155-150 Ma, and represent some of the earliest dromaeosaurids known to science, further refuting a "temporal paradox". Furthermore, a small, as-yet undescribed troodontid known as WDC DML 001 was announced in 2003 as having been found in the Late Jurassic Morrison Formation of eastern/central Wyoming. The presence of this derived maniraptoran in Jurassic sediments is a strong refutation of the "temporal paradox". Third, if the temporal paradox indicated that birds should not have evolved from dinosaurs, then what animals are more likely ancestors, considering their age? Brochu and Norell (2001) analyzed this question using several of the other archosaurs that have been proposed as bird ancestors, and found that all of them create temporal paradoxes—long stretches between the ancestor and Archaeopteryx where there are no intermediate fossils—that are actually worse. Thus, even if one used the logic of the temporal paradox, one should still prefer dinosaurs as the ancestors of birds.
Quick & Ruben (2009)
In their 2009 paper, Quick & Ruben argue that modern birds are fundamentally different from non-avian dinosaurs in terms of abdominal soft-tissue morphology, the implication being that birds cannot be modified dinosaurs. The paper asserts that a specialized 'femoral-thigh complex', combined with a synsacrum and ventrally separated pubic bones, provides crucial mechanical support for the abdominal wall in modern birds, and has thereby allowed the evolution of large abdominal air-sacs that function in respiration. In contrast, say the authors, theropod dinosaurs lack these features and had a highly mobile femur that cannot have been incorporated into abdominal support. Therefore, non-avian theropods cannot have had abdominal air-sacs that functioned like those of modern birds, and non-avian theropods were fundamentally different from modern birds. The conclusion that birds therefore cannot be descended from dinosaurs was not actually stated in the paper, but was of course played up in press interviews. Papers of this sort never really demonstrate anything, but merely try to shoot holes in a given line of supporting evidence. It has been argued that respiratory turbinates supposedly falsify dinosaur endothermy, even though it has never been demonstrated that respiratory turbinates really are a requirement for any given physiological regime, and even though there are endotherms that lack respiratory turbinates. The innards of Sinosauropteryx and Scipionyx also supposedly falsify avian-like air-sac systems in non-avian coelurosaurs and demonstrate a crocodilian-like hepatic piston diaphragm, even though personal interpretation is required to accept that this claim might be correct. Furthermore, even though crocodilians and dinosaurs are fundamentally different in pelvic anatomy, some living birds have the key soft-tissue traits reported by Ruben et al. in Sinosauropteryx and Scipionyx, and yet still have an avian respiratory system. For a more detailed rebuttal of Quick & Ruben's paper, see this post by Darren Naish at Tetrapod Zoology.
There have been claims that the supposed feathers of the Chinese fossils are a preservation artifact. Despite these doubts, the fossil feathers have roughly the same appearance as those of birds fossilized in the same locality, so there is no serious reason to think they are of a different nature; moreover, no non-theropod fossil from the same site shows such an artifact, though some show unambiguous hair (some mammals) or scales (some reptiles).
Some researchers have interpreted the filamentous impressions around Sinosauropteryx fossils as remains of collagen fibers, rather than primitive feathers. Since they are clearly external to the body, these researchers have proposed that the fibers formed a frill on the back of the animal and underside of its tail, similar to some modern aquatic lizards.
If correct, this would refute the proposal that Sinosauropteryx is the most basal known theropod genus with feathers, and would also call into question the current theory of feather origins itself, including the idea that the first feathers evolved not for flight but for insulation, and that they made their first appearance in relatively basal dinosaur lineages that later evolved into modern birds.
The Archaeoraptor fake
- Main article: Archaeoraptor
In 1999, a supposed 'missing link' fossil of an apparently feathered dinosaur named "Archaeoraptor liaoningensis", found in Liaoning Province, northeastern China, turned out to be a forgery. Comparing the photograph of the specimen with another find, Chinese paleontologist Xu Xing came to the conclusion that it was composed of two portions of different fossil animals. His claim prompted National Geographic to review its research, and it too came to the same conclusion. The bottom portion of the "Archaeoraptor" composite came from a legitimate feathered dromaeosaurid now known as Microraptor, and the upper portion from a previously known primitive bird called Yanornis.
Flying and gliding
The ability to fly or glide has been suggested for at least two dromaeosaurid species. The first, Rahonavis ostromi (originally classified as a bird, but found to be a dromaeosaurid in later studies), may have been capable of powered flight, as indicated by its long forelimbs with evidence of quill knob attachments for long, sturdy flight feathers. The forelimbs of Rahonavis were more powerfully built than those of Archaeopteryx, and show evidence that they bore strong ligament attachments necessary for flapping flight. Luis Chiappe concluded that, given these adaptations, Rahonavis could probably fly but would have been more clumsy in the air than modern birds.
Another species of dromaeosaurid, Microraptor gui, may have been capable of gliding using its well-developed wings on both the fore and hind limbs. Microraptor was among the first non-avian dinosaurs discovered with the impressions of feathers and wings. On Microraptor, the long feathers on the forelimbs possess asymmetrical vanes. The external vanes are narrow, while the internal ones are broad. In addition, Microraptor possessed elongated remiges with asymmetrical vanes that demonstrate aerodynamic function on the hind limbs. A 2005 study by Sankar Chatterjee suggested that the wings of Microraptor functioned like a split-level "biplane", and that it likely employed a phugoid style of gliding, in which it would launch from a perch and swoop downward in a 'U' shaped curve, then lift again to land on another tree, with the tail and hind wings helping to control its position and speed. Chatterjee also found that Microraptor had the basic requirements to sustain level powered flight in addition to gliding.
Microraptor had two sets of wings, on both its forelegs and hind legs. The long feathers on the legs of Microraptor were true flight feathers as seen in modern birds, with asymmetrical vanes on the arm, leg, and tail feathers. As in bird wings, Microraptor had both primary (anchored to the hand) and secondary (anchored to the arm) flight feathers. This standard wing pattern was mirrored on the hind legs, with flight feathers anchored to the upper foot bones as well as the upper and lower leg. It has been proposed by Chinese scientists that the animal glided and probably lived in trees; they point out that wings anchored to the feet of Microraptor would have hindered its ability to run on the ground, and suggest that all primitive dromaeosaurids may have been arboreal.
Sankar Chatterjee determined in 2005 that, in order for the creature to glide or fly, the wings must have been on different levels (as on a biplane) and not overlaid (as on a dragonfly), and that the latter posture would have been anatomically impossible. Using this biplane model, Chatterjee was able to calculate possible methods of gliding, and determined that Microraptor most likely employed a phugoid style of gliding—launching itself from a perch, the animal would have swooped downward in a deep 'U' shaped curve and then lifted again to land on another tree. The feathers not directly employed in the biplane wing structure, like those on the tibia and the tail, could have been used to control drag and alter the flight path, trajectory, etc. The orientation of the hind wings would also have helped the animal control its gliding flight. In 2007, Chatterjee used computer algorithms that test animal flight capacity to determine whether or not Microraptor was capable of true, powered flight, in addition to passive gliding. The resulting data showed that Microraptor did have the requirements to sustain level powered flight, so it is theoretically possible that the animal flew on occasion in addition to gliding.
Saurischian integumentary structures
The bird-like hip structure possessed by modern birds actually evolved independently within the "lizard-hipped" saurischians (specifically, within a sub-group of saurischians called the Maniraptora) in the Jurassic Period. In this example of convergent evolution, birds developed hips oriented similarly to the earlier ornithischian hip anatomy, in both cases possibly as an adaptation to a herbivorous or omnivorous diet.
In Saurischia, maniraptorans are characterized by long arms and three-fingered hands, as well as a "half-moon shaped" (semi-lunate) bone in the wrist (carpus). Maniraptorans are the only dinosaurs known to have breast bones (ossified sternal plates). In 2004, Tom Holtz and Halszka Osmólska pointed out six other maniraptoran characters relating to specific details of the skeleton. Unlike most other saurischian dinosaurs, which have pubic bones that point forward, several groups of maniraptorans have an ornithischian-like backwards-pointing hip bone. A backward-pointing hip characterizes the therizinosaurs, dromaeosaurids, avialans, and some primitive troodontids. The fact that the backward-pointing hip is present in so many diverse maniraptoran groups has led most scientists to conclude that the "primitive" forward-pointing hip seen in advanced troodontids and oviraptorosaurs is an evolutionary reversal, and that these groups evolved from ancestors with backward-pointing hips.
Modern pennaceous feathers and remiges are known from advanced maniraptoran groups (Oviraptorosauria and Paraves). More primitive maniraptorans, such as therizinosaurs (specifically Beipiaosaurus), preserve a combination of simple downy filaments and unique elongated quills. Powered and/or gliding flight is present in members of Avialae, and possibly in some dromaeosaurids such as Rahonavis and Microraptor. Simple feathers are known from more primitive coelurosaurs such as Sinosauropteryx, and possibly from even more distantly related species such as the ornithischian Tianyulong and the flying pterosaurs. Thus it appears as if some form of feathers or down-like integument would have been present in all maniraptorans, at least when they were young.
Skin impressions from the type specimen of Beipiaosaurus inexpectus indicated that the body was covered predominantly by downy, feather-like fibers similar to those of Sinosauropteryx, but longer and oriented perpendicular to the arm. Xu et al., who described the specimen, suggested that these downy feathers represent an intermediate stage between Sinosauropteryx and more advanced birds (Avialae).
Unique among known theropods, Beipiaosaurus also possessed a secondary coat of much longer, simpler feathers that rose out of the down layer. These unique feathers (known as EBFFs, or elongated broad filamentous feathers) were first described by Xu et al. in 2009, based on a specimen consisting of the torso, head and neck. Xu and his team also found EBFFs in the original type specimen of B. inexpectus, revealed by further preparation. The holotype also preserved a pygostyle-like structure. The holotype was discovered in two phases. Limb fragments and dorsal and cervical vertebrae were discovered initially. The discovery site was re-excavated later on, and this time an articulated tail and partial pelvis were discovered. All come from the same individual.
The holotype has the largest protofeathers known of any feathered dinosaur, with the author and paleontologist Xu Xing stating: "Most integumentary filaments are about 50 mm in length, although the longest is up to 70 mm. Some have indications of branching distal ends." The holotype also preserved dense patches of parallel integumentary structures in association with its lower arm and leg.
Thick, stiff, spine-like structures were recovered sprouting from the new specimen's throat region, the back of its head, its neck and its back. New preparation of the holotype reveals that the same structures are also present on the tail (though not associated with the pygostyle-like structure).
The EBFFs differ from other feather types in that they consist of a single, unbranched filament. Most other primitive feathered dinosaurs have down-like feathers made up of two or more filaments branching out from a common base or along a central shaft. The EBFFs of Beipiaosaurus are also much longer than other primitive feather types, measuring about 100-150 millimeters (4-6 inches) long, roughly half the length of the neck. In Sinosauropteryx, the longest feathers are only about 15% of the neck length. The EBFFs of Beipiaosaurus are also unusually broad, up to 3 mm wide in the type specimen. The broadest feathers of Sinosauropteryx are only 0.2 mm wide, and only slightly wider in larger forms such as Dilong. Additionally, where most primitive feather types are circular in cross section, EBFFs appear to be oval-shaped. None of the preserved EBFFs were curved or bent beyond a broad arc in either specimen, indicating that they were fairly stiff. They were probably hollow, at least at the base.
In a 2009 interview, Xu stated: "Both [feather types] are definitely not for flight, inferring the function of some structures of extinct animals would be very difficult, and in this case, we are not quite sure whether these feathers are for display or some other functions." He speculated that the finer feathers served as an insulatory coat and that the larger feathers were ornamental, perhaps for social interactions such as mating or communication.
Long filamentous structures have been preserved along with skeletal remains of numerous coelurosaurs from the Early Cretaceous Yixian Formation and other nearby geological formations from Liaoning, China. These filaments have usually been interpreted as "protofeathers," homologous with the branched feathers found in birds and some non-avian theropods, although other hypotheses have been proposed. A skeleton of Dilong was described in the scientific journal Nature in 2004 that included the first example of "protofeathers" in a tyrannosauroid from the Yixian Formation of China. Similarly to down feathers of modern birds, the "protofeathers" found in Dilong were branched but not pennaceous, and may have been used for insulation.
The presence of "protofeathers" in basal tyrannosauroids is not surprising, since they are now known to be characteristic of coelurosaurs, found in other basal genera like Sinosauropteryx, as well as all more derived groups. Rare fossilized skin impressions of large tyrannosaurids lack feathers, however, instead showing skin covered in scales. While it is possible that protofeathers existed on parts of the body which have not been preserved, a lack of insulatory body covering is consistent with modern multi-ton mammals such as elephants, hippopotamuses, and most species of rhinoceros. Alternatively, secondary loss of "protofeathers" in large tyrannosaurids may be analogous with the similar loss of hair in the largest modern mammals like elephants, where a low surface area-to-volume ratio slows down heat transfer, making insulation by a coat of hair unnecessary. Therefore, as large animals evolve in or disperse into warm climates, a coat of fur or feathers loses its selective advantage for thermal insulation and can instead become a disadvantage, as the insulation traps excess heat inside the body, possibly overheating the animal. Protofeathers may also have been secondarily lost during the evolution of large tyrannosaurids, especially in warm Cretaceous climates. Tyrannosaurus at one stage of its life may have been covered in down-like feathers, although there is no direct fossil evidence of this.
A few troodont fossils, including specimens of Mei and Sinornithoides, demonstrate that these animals roosted like birds, with their heads tucked under their forelimbs. These fossils, as well as numerous skeletal similarities to birds and related feathered dinosaurs, support the idea that troodontids probably bore a bird-like feathered coat. The discovery of a fully-feathered, primitive troodontid (Jinfengopteryx) lends support to this. The type specimen of Jinfengopteryx elegans is 55 cm long and from the Qiaotou Formation of Liaoning Province, China.
Troodontids are important to research on the origin of birds because they share many anatomical characters with early birds. Crucially, the substantially complete fossil identified as WDC DML 001 ("Lori") is a troodontid from the Late Jurassic Morrison Formation, close to the time of Archaeopteryx. The discovery of this Jurassic troodontid is positive physical evidence that derived deinonychosaurs were present very near the time that birds arose, and that basal paravians must have evolved much earlier. This fact strongly invalidates the "temporal paradox" cited by the few remaining opponents of the idea that birds are closely related to dinosaurs (see Claimed temporal paradox above).
There is a large body of evidence showing that dromaeosaurids were covered in feathers. Some dromaeosaurid fossils preserve long, pennaceous feathers on the hands and arms (remiges) and tail (rectrices), as well as shorter, down-like feathers covering the body. Other fossils, which do not preserve actual impressions of feathers, still preserve the associated bumps on the forearm bones where long wing feathers would have attached in life. Overall, this feather pattern looks very much like that of Archaeopteryx.
The first known dromaeosaur with definitive evidence of feathers was Sinornithosaurus, reported from China by Xu et al. in 1999. NGMC 91-A, the Sinornithosaurus-like theropod informally dubbed "Dave", possessed unbranched fibers in addition to more complex branched and tufted structures. Many other dromaeosaurid fossils have been found with feathers covering their bodies, some with fully developed feathered wings. Several even show evidence of a second pair of wings on the hind legs, including Microraptor and Cryptovolans. While direct feather impressions are only possible in fine-grained sediments, some fossils found in coarser rocks show evidence of feathers by the presence of quill knobs, the attachment points for wing feathers possessed by some birds. The dromaeosaurids Rahonavis and Velociraptor have both been found with quill knobs, showing that these forms had feathers despite no impressions having been found. In light of this, it is most likely that even the larger ground-dwelling dromaeosaurids bore feathers, since even flightless birds today retain most of their plumage, and relatively large dromaeosaurids, like Velociraptor, are known to have retained pennaceous feathers. Though some scientists had suggested that the larger dromaeosaurids lost some or all of their insulatory covering, the discovery of feathers in Velociraptor specimens has been cited as evidence that all members of the family retained feathers.
Fossils of dromaeosaurids more primitive than Velociraptor are known to have had feathers covering their bodies, and fully developed, feathered wings. The fact that the ancestors of Velociraptor were feathered and possibly capable of flight long suggested to paleontologists that Velociraptor bore feathers as well, since even flightless birds today retain most of their feathers.
In September 2007, Alan Turner, Peter Makovicky, and Mark Norell reported the presence of quill knobs on the ulna of a Velociraptor specimen from Mongolia. Six bumps approximately 4 mm apart were found in a straight line along the bone, directly corresponding to the same structures in living birds, where the bumps serve as anchors for the secondary feathers. These bumps on bird wing bones show where feathers anchor, and their presence on Velociraptor indicates that it too had feathers. According to paleontologist Alan Turner,
A lack of quill knobs does not necessarily mean that a dinosaur did not have feathers. Finding quill knobs on Velociraptor, though, means that it definitely had feathers. This is something we'd long suspected, but no one had been able to prove.
Co-author Mark Norell, Curator-in-Charge of fossil reptiles, amphibians and birds at the American Museum of Natural History, also weighed in on the discovery, saying:
The more that we learn about these animals the more we find that there is basically no difference between birds and their closely related dinosaur ancestors like velociraptor. Both have wishbones, brooded their nests, possess hollow bones, and were covered in feathers. If animals like velociraptor were alive today our first impression would be that they were just very unusual looking birds.
According to Turner and co-authors Norell and Peter Makovicky, quill knobs are not found in all prehistoric birds, and their absence does not mean that an animal was not feathered – flamingos, for example, have no quill knobs. However, their presence confirms that Velociraptor bore modern-style wing feathers, with a rachis and vane formed by barbs. The forearm specimen on which the quill knobs were found (specimen number IGM 100/981) represents an animal 1.5 meters in length (5 ft) and 15 kilograms (33 lbs) in weight. Based on the spacing of the six preserved knobs in this specimen, the authors suggested that Velociraptor bore 14 secondaries (wing feathers stemming from the forearm), compared with 12 or more in Archaeopteryx, 18 in Microraptor, and 10 in Rahonavis. This type of variation in the number of wing feathers between closely related species, the authors asserted, is to be expected, given similar variation among modern birds.
Turner and colleagues interpreted the presence of feathers on Velociraptor as evidence against the idea that the larger, flightless maniraptorans lost their feathers secondarily due to larger body size. Furthermore, they noted that quill knobs are almost never found in flightless bird species today, and that their presence in Velociraptor (presumed to have been flightless due to its relatively large size and short forelimbs) is evidence that the ancestors of dromaeosaurids could fly, making Velociraptor and other large members of this family secondarily flightless, though it is possible the large wing feathers inferred in the ancestors of Velociraptor had a purpose other than flight. The feathers of the flightless Velociraptor may have been used for display, for covering their nests while brooding, or for added speed and thrust when running up inclined slopes.
The preserved impressions of integumentary structures in Sinornithosaurus were composed of filaments, and showed two features that indicate they are early feathers. First, several filaments were joined together into "tufts", similar to the way down is structured. Second, a row of filaments (barbs) were joined together to a main shaft (rachis), making them similar in structure to normal bird feathers. However, they do not have the secondary branching and tiny hooks (barbules) that modern feathers have, which allow the feathers of modern birds to form a discrete vane. The filaments are arranged parallel to each other and perpendicular to the bones. In specimen NGMC 91, the feathers covered the entire body, including the head in front of the eye, the neck, wing-like sprays on the arms, long feathers on the thighs, and a lozenge-shaped fan on the tail like that of Archaeopteryx.
Pedopenna is a maniraptoran theropod that shows evidence of avian affinities, providing further evidence of the dinosaur-bird evolutionary relationship. Apart from having a very bird-like skeletal structure in its legs, Pedopenna was remarkable for the presence of long pennaceous feathers on the metatarsus (foot). Some deinonychosaurs are also known to have these 'hind wings', but those of Pedopenna differ from those of animals like Microraptor. The hind wings of Pedopenna were smaller and more rounded in shape, and the longest feathers were slightly shorter than the metatarsus, at about 55 mm (2 in) long. Additionally, the feathers of Pedopenna were symmetrical, unlike the asymmetrical feathers of some deinonychosaurs and birds. Since asymmetrical feathers are typical of animals adapted to flying, it is likely that Pedopenna represents an early stage in the development of these structures. While many of the feather impressions in the fossil are weak, it is clear that each possessed a rachis and barbs, and while the exact number of foot feathers is uncertain, they are more numerous than in the hind wings of Microraptor. Pedopenna also shows evidence of shorter feathers overlying the long foot feathers, evidence for the presence of coverts as seen in modern birds. The fact that these feathers show fewer aerodynamic adaptations than the similar hind wings of Microraptor, and appear to be less stiff, suggests that if they did have some kind of aerodynamic function, it was much weaker than in deinonychosaurs and birds. Xu and Zhang, in their 2005 description of Pedopenna, suggested that the feathers could be ornamental, or even vestigial. It is possible that a hind wing was present in the ancestors of deinonychosaurs and birds, and later lost in the bird lineage, with Pedopenna representing an intermediate stage in which the hind wings were being reduced from a functional gliding apparatus to a display or insulatory function.
Anchiornis is notable for its proportionally long forelimbs, which measured 80% of the total length of the hind limbs. This is similar to the condition in early avians such as Archaeopteryx, and the authors pointed out that long forelimbs are necessary for flight. It is possible that Anchiornis was able to fly or glide, and it may have had a functional airfoil. Anchiornis also had a more avian wrist than other non-avian theropods, although its hind leg proportions are more like those of more basal theropod dinosaurs than of avialans. Faint, carbonized feather impressions were preserved in patches in the type specimen. Feathers on the torso measured an average of 20 mm in length, but the feathers were too poorly preserved to ascertain details of their structure. A cladistic analysis indicated that Anchiornis is part of the avian lineage, but outside of the clade that includes Archaeopteryx and modern birds, strongly suggesting that Anchiornis was a basal member of the Avialae and the sister taxon of Aves. Anchiornis can therefore be considered a non-avian avialan.
All specimens of Sinosauropteryx preserve integumentary structures (filaments arising from the skin) which most paleontologists interpret as very primitive feathers. These short, down-like filaments are preserved all along the back half of the skull, arms, neck, back, and top and bottom of the tail. Additional patches of feathers have been identified on the sides of the body, and paleontologists Chen, Dong and Zheng proposed that the density of the feathers on the back and the randomness of the patches elsewhere on the body indicate that the animals would have been fully feathered in life, with the ventral feathers having been removed by decomposition.
The filaments are preserved with a gap between them and the bones, which several authors have noted corresponds closely to the expected amount of skin and muscle tissue that would have been present in life. The feathers adhere close to the bone on the skull and at the end of the tail, where little to no muscle was present, and the gap increases over the back vertebrae, where more musculature would be expected, indicating that the filaments were external to the skin and do not correspond to sub-cutaneous structures.
The random positioning of the filaments and often "wavy" lines of preservation indicate that they were soft and pliable in life. Examination with microscopes shows that each individual filament appears dark along the edges and light internally, suggesting that they were hollow, like modern feathers. Compared to modern mammals the filaments were quite coarse, with each individual strand much larger and thicker than the corresponding hairs of similarly sized mammals.
The length of the filaments varied across the body. They were shortest just in front of the eyes, with a length of 13 mm. Going further down the body, the filaments rapidly increase in length until reaching 35 mm over the shoulder blades. The length remains uniform over the back until beyond the hips, when the filaments lengthen again and reach their maximum length midway down the tail, at 40 mm. The filaments on the underside of the tail are shorter overall and decrease in length more rapidly than those on the dorsal surface. By the 25th tail vertebra, the filaments on the underside reach a length of only 35 mm. The longest feathers present on the forearm measured 14 mm.
Overall, the filaments most closely resemble the "plumules" or down-like feathers of some modern birds, with a very short quill and long, thin barbs. The same structures are seen in other fossils from the Yixian Formation, including Confuciusornis.
Analysis of the fossils of Sinosauropteryx has shown an alternation of lighter and darker bands preserved on the tail, giving an indication of what the animal looked like in life. This banding is probably due to preserved areas of melanin, which can produce dark tones in fossils.
The type specimen of Epidendrosaurus also preserved faint feather impressions at the end of the tail, similar to the pattern found in the dromaeosaurid Microraptor. While the reproductive strategies of Epidendrosaurus itself remain unknown, several tiny fossil eggs discovered in Phu Phok, Thailand (one of which contained the embryo of a theropod dinosaur) may have been laid by a small dinosaur similar to Epidendrosaurus or Microraptor. The authors who described these eggs estimated the dinosaur they belonged to would have had the adult size of a modern Goldfinch.
Scansoriopteryx fossils preserve impressions of wispy, down-like feathers around select parts of the body, forming V-shaped patterns similar to those seen in modern down feathers. The most prominent feather impressions trail from the left forearm and hand. The longer feathers in this region led Czerkas and Yuan to speculate that adult scansoriopterygids may have had reasonably well-developed wing feathers which could have aided in leaping or rudimentary gliding, though they ruled out the possibility that Scansoriopteryx could have achieved powered flight. Like other maniraptorans, Scansoriopteryx had a semilunate (half-moon shaped) bone in the wrist that allowed for bird-like folding motion in the hand. Even if powered flight was not possible, this motion could have aided maneuverability in leaping from branch to branch. Scales were also preserved near the base of the tail. For more on the implications of this discovery, see Scansoriopteryx#Implications.
Oviraptorosaurs, like dromaeosaurs, are so bird-like that several scientists consider them to be true birds, more advanced than Archaeopteryx. Gregory S. Paul has written extensively on this possibility, and Teresa Maryańska and colleagues published a technical paper detailing this idea in 2002. Michael Benton, in his widely respected text Vertebrate Palaeontology, also included oviraptorosaurs as an order within the class Aves. However, a number of researchers have disagreed with this classification, retaining oviraptorosaurs as non-avialan maniraptorans slightly more primitive than the dromaeosaurs.
Evidence for feathered oviraptorosaurs exists in several forms. Most directly, two species of the primitive oviraptorosaur Caudipteryx have been found with impressions of well-developed feathers, most notably on the wings and tail, suggesting that they functioned at least partially for display. Secondly, at least one oviraptorosaur (Nomingia) was preserved with a tail ending in something like a pygostyle, a bony structure at the end of the tail that, in modern birds, is used to support a fan of feathers. Similarly, quill knobs (anchor points for wing feathers on the ulna) have been reported in the oviraptorosaurian species Avimimus portentosus. Additionally, a number of oviraptorid specimens have famously been discovered in a nesting position similar to that of modern birds. The arms of these specimens are positioned in such a way that they could perfectly cover their eggs if they had small wings and a substantial covering of feathers. Protarchaeopteryx, an oviraptorosaur, is well known for its fan-like array of 12 rectricial feathers, but it also seems to have sported simple filament-like structures elsewhere on the tail. Soft and downy feathers are preserved in the chest region and tail base, and are also preserved adjacent to the femora.
The bodies and limbs of oviraptorosaurs are arranged in a bird-like manner, suggesting the presence of feathers on the arms which may have been used for insulating eggs or brooding young. Members of Oviraptoridae possess a quadrate bone that shows particularly avian characteristics, including a pneumatized, double-headed structure, the presence of the pterygoid process, and an articular fossa for the quadratojugal.
Oviraptorids were probably feathered, since some close relatives were found with feathers preserved (Caudipteryx and possibly Protarchaeopteryx). Another finding pointing to this is the discovery in Nomingia of a pygostyle, a bone that results from the fusion of the last tail vertebrae and that in birds supports a fan of tail feathers. Finally, the arm position of the brooding Citipati would have been far more effective if feathers were present to cover the eggs.
Because Caudipteryx has clear, unambiguously pennaceous feathers like those of modern birds, and because several cladistic analyses have consistently recovered it as a non-avian oviraptorid dinosaur, it provided, at the time of its description, the clearest and most succinct evidence that birds evolved from dinosaurs. Lawrence Witmer stated:
- "The presence of unambiguous feathers in an unambiguously nonavian theropod has the rhetorical impact of an atomic bomb, rendering any doubt about the theropod relationships of birds ludicrous.”"
However, not all scientists agreed that Caudipteryx was unambiguously non-avian, and some continued to question the general consensus. Paleornithologist Alan Feduccia sees Caudipteryx as a flightless bird that evolved from earlier archosaurian dinosaurs rather than from late theropods. Jones et al. (2000) found that Caudipteryx was a bird, based on a mathematical comparison of the body proportions of flightless birds and non-avian theropods. Dyke and Norell (2005) criticized this result for flaws in the mathematical methods, and produced results of their own which supported the opposite conclusion. Other researchers not normally involved in the debate over bird origins, such as Zhou, acknowledged that the true affinities of Caudipteryx were debatable.
In 1997, filament-like integumentary structures were reported to be present in the Spanish ornithomimosaur Pelecanimimus polyodon. Furthermore, one published life restoration depicts Pelecanimimus as having been covered in the same sort of quill-like structures as are present on Sinosauropteryx and Dilong. However, a brief 1997 report that described soft-tissue mineralization in the Pelecanimimus holotype has been taken by most workers as the definitive last word 'demonstrating' that integumentary fibers were absent from this taxon.
However, that report described soft-tissue preservation in only one small patch of tissue, and the absence of integument in this patch does not provide much information about the distribution of integument on the live animal. This might explain why a few theropod workers (notably Paul Sereno and Kevin Padian) have continued to indicate the presence of filamentous integumentary structures in Pelecanimimus. Feduccia et al. (2005) argued that Pelecanimimus possessed scaly arms and figured some unusual rhomboidal structures in an effort to demonstrate this. The objects that they illustrate do not resemble scales, and it remains to be seen whether they have anything to do with the integument of this dinosaur. A full description or monograph of this dinosaur, which might provide more information on this subject, has yet to be published.
Ornithischian integumentary structures
The integument, or body covering, of Psittacosaurus is known from a Chinese specimen, which most likely comes from the Yixian Formation of Liaoning. The specimen, which is not yet assigned to any particular species, was illegally exported from China, in violation of Chinese law, but was purchased by a German museum and arrangements are being made to return the specimen to China.
Most of the body was covered in scales. Larger scales were arranged in irregular patterns, with numerous smaller scales occupying the spaces between them, similarly to skin impressions known from other ceratopsians, such as Chasmosaurus. However, a series of what appear to be hollow, tubular bristles, approximately 16 centimeters (6.3 in) long, were also preserved, arranged in a row down the dorsal (upper) surface of the tail. However, according to Mayr et al., "[a]t present, there is no convincing evidence which shows these structures to be homologous to the structurally different [feathers and protofeathers] of theropod dinosaurs." As the structures are only found in a single row on the tail, it is unlikely that they were used for thermoregulation, but they may have been useful for communication through some sort of display.
Tianyulong is notable for the row of long, filamentous integumentary structures apparent on the back, tail and neck of the fossil. The similarity of these structures with those found on some derived theropods suggests their homology with feathers and raises the possibility that the earliest dinosaurs and their ancestors were covered with analogous dermal filamentous structures that can be considered as primitive feathers (proto-feathers).
The filamentous integumentary structures are preserved on three areas of the fossil: in one patch just below the neck, another one on the back, and the largest one above the tail. The hollow filaments are parallel to each other and are singular, with no evidence of branching. They also appear to be relatively rigid, making them more analogous to the integumentary structures found on the tail of Psittacosaurus than to the proto-feather structures found in avian and non-avian theropods. Among the theropods, the structures in Tianyulong are most similar to the singular unbranched proto-feathers of Sinosauropteryx and Beipiaosaurus. The estimated length of the integumentary structures on the tail is about 60 mm, which is seven times the height of a caudal vertebra. Their length and hollow nature argue against them being subdermal structures such as collagen fibers.
Phylogenetics and homology
Such dermal structures have previously been reported only in derived theropods and ornithischians, and their discovery in Tianyulong extends the existence of such structures further down in the phylogenetic tree. However, the homology between the ornithischian filaments and the theropod proto-feathers is not obvious. If the homology is supported, the consequence is that the common ancestor of both saurischians and ornithischians was covered by feather-like structures, and that groups for which skin impressions are known, such as the sauropods, were only secondarily featherless. If the homology is not supported, it would indicate that these filamentous dermal structures evolved independently in saurischians and ornithischians, as well as in other archosaurs such as the pterosaurs. The authors (in supplementary information to their primary article) noted that the discovery of similar filamentous structures in the theropod Beipiaosaurus bolstered the idea that the structures on Tianyulong are homologous with feathers. Both the filaments of Tianyulong and the filaments of Beipiaosaurus were long, singular, and unbranched. In Beipiaosaurus, however, the filaments were flattened. In Tianyulong, the filaments were round in cross section, and therefore closer in structure to the earliest forms of feathers predicted by developmental models.
Some scientists have argued that other dinosaur proto-feathers are actually fibers of collagen that have come loose from the animals' skins. However, collagen fibers are solid structures; based on the long, hollow nature of the filaments on Tianyulong the authors rejected this explanation.
After a century of hypotheses without conclusive evidence, especially well-preserved (and legitimate) fossils of feathered dinosaurs were discovered during the 1990s, and more continue to be found. The fossils were preserved in a lagerstätte — a sedimentary deposit exhibiting remarkable richness and completeness in its fossils — in Liaoning, China. The area had repeatedly been smothered in volcanic ash produced by eruptions in Inner Mongolia 124 million years ago, during the Early Cretaceous Period. The fine-grained ash preserved the living organisms that it buried in fine detail. The area was teeming with life, with millions of leaves, angiosperms (the oldest known), insects, fish, frogs, salamanders, mammals, turtles, lizards and crocodilians discovered to date.
The most important discoveries at Liaoning have been a host of feathered dinosaur fossils, with a steady stream of new finds filling in the picture of the dinosaur-bird connection and adding more to theories of the evolutionary development of feathers and flight. Norell et al. (2007) reported quill knobs from an ulna of Velociraptor mongoliensis, and these are strongly correlated with large and well-developed secondary feathers.
List of dinosaur genera preserved with evidence of feathers
A number of non-avian dinosaurs are now known to have been feathered. Direct evidence of feathers exists for the following genera, listed in the order in which currently accepted evidence was first published. In all examples, the evidence described consists of feather impressions, except those marked with an asterisk (*), which denotes genera known to have had feathers based on skeletal or chemical evidence, such as the presence of quill knobs.
- Avimimus* (1987)
- Sinosauropteryx (1996)
- Protarchaeopteryx (1997)
- Caudipteryx (1998)
- Rahonavis* (1998)
- Shuvuuia (1999)
- Sinornithosaurus (1999)
- Beipiaosaurus (1999)
- Microraptor (2000)
- Nomingia* (2000)
- Cryptovolans (2002)
- Scansoriopteryx (2002)
- Epidendrosaurus (2002)
- Psittacosaurus? (2002)
- Yixianosaurus (2003)
- Dilong (2004)
- Pedopenna (2005)
- Jinfengopteryx (2005)
- Sinocalliopteryx (2007)
- Velociraptor* (2007)
- Epidexipteryx (2008)
- Anchiornis (2009)
- Tianyulong? (2009)
- Note, filamentous structures in some ornithischian dinosaurs (Psittacosaurus, Tianyulong) and pterosaurs may or may not be homologous with the feathers and protofeathers of theropods.
Phylogeny and the inference of feathers in other dinosaurs
Feathered dinosaur fossil finds to date, together with cladistic analysis, suggest that many types of theropod may have had feathers, not just those that are especially similar to birds. In particular, the smaller theropod species may all have had feathers, and even the larger theropods (for instance, T. rex) may have had feathers in their early stages of development after hatching. Whereas these smaller animals may have benefited from the insulation of feathers, large adult theropods are unlikely to have had them, since inertial heat retention would likely have been sufficient to manage body heat. Excess internal heat may even have become a problem had these very large creatures been feathered.
Fossil feather impressions are extremely rare; therefore only a few feathered dinosaurs have been identified so far. However, through a process called phylogenetic bracketing, scientists can infer the presence of feathers on poorly-preserved specimens. All fossil feather specimens have been found to show certain similarities. On the basis of these similarities and of developmental research, almost all scientists agree that feathers evolved only once in dinosaurs. Feathers would then have been passed down to all later, more derived species (although it is possible that some lineages lost feathers secondarily). If a dinosaur falls at a point on an evolutionary tree within the known feather-bearing lineages, scientists assume it too had feathers, unless conflicting evidence is found. This technique can also be used to infer the type of feathers a species may have had, since the developmental history of feathers is now reasonably well-known.
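The bracketing logic described above can be illustrated with a minimal sketch in Python, assuming a single origin of feathers and no secondary loss; the tree topology, taxon names, and feather assignments below are simplified, hypothetical placeholders rather than a published phylogeny. A taxon is inferred to have been feathered if it lies within the clade defined by the most recent common ancestor of the taxa that preserve direct evidence of feathers.

```python
# A minimal sketch of phylogenetic bracketing (illustrative only).
# Assumptions: a single origin of feathers and no secondary loss.
# The tree and the feather assignments below are hypothetical placeholders.

# child -> parent relationships for a toy coelurosaur tree
PARENTS = {
    "Tyrannosauroidea": "Coelurosauria",
    "Compsognathidae": "Coelurosauria",
    "Maniraptora": "Coelurosauria",
    "Oviraptorosauria": "Maniraptora",
    "Deinonychosauria": "Maniraptora",
    "Aves": "Maniraptora",
}

# Taxa with direct fossil evidence of feathers (again, illustrative only)
FEATHERED = {"Compsognathidae", "Oviraptorosauria", "Aves"}


def ancestors(taxon):
    """Return the ancestors of a taxon, ordered from closest to the root."""
    chain = []
    while taxon in PARENTS:
        taxon = PARENTS[taxon]
        chain.append(taxon)
    return chain


def most_recent_common_ancestor(taxa):
    """Find the deepest node shared by the ancestor chains of all taxa."""
    chains = [[t] + ancestors(t) for t in taxa]
    shared = set(chains[0]).intersection(*chains[1:])
    # Ancestor chains are nested, so the shared node closest to the tip
    # of the first chain is the most recent common ancestor.
    return min(shared, key=chains[0].index)


def feathers_inferred(taxon):
    """Infer feathers if the taxon has direct evidence, or if it falls
    within the clade bracketed by the feather-bearing taxa."""
    mrca = most_recent_common_ancestor(FEATHERED)
    return taxon in FEATHERED or taxon == mrca or mrca in ancestors(taxon)


if __name__ == "__main__":
    for t in ("Tyrannosauroidea", "Deinonychosauria", "Aves"):
        print(t, feathers_inferred(t))  # all fall inside the bracket
```

In this toy example, Tyrannosauroidea and Deinonychosauria both fall inside the bracket defined by the feather-bearing taxa, so they are inferred to have been feathered even though they preserve no feathers themselves; real analyses would, of course, also weigh any conflicting evidence, such as preserved scale impressions.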
Nearly all paleontologists regard birds as coelurosaurian theropod dinosaurs. Within Coelurosauria, multiple cladistic analyses have found support for a clade named Maniraptora, consisting of therizinosauroids, oviraptorosaurs, troodontids, dromaeosaurids, and birds. Of these, dromaeosaurids and troodontids are usually united in the clade Deinonychosauria, which is a sister group to birds (together forming the node-clade Eumaniraptora) within the stem-clade Paraves.
Other studies have proposed alternative phylogenies in which certain groups of dinosaurs that are usually considered non-avian are suggested to have evolved from avian ancestors. For example, a 2002 analysis found oviraptorosaurs to be basal avians. Alvarezsaurids, known from Asia and the Americas, have been variously classified as basal maniraptorans, paravians, the sister taxon of ornithomimosaurs, as well as specialized early birds. The genus Rahonavis, originally described as an early bird, has been identified as a non-avian dromaeosaurid in several studies. Dromaeosaurids and troodontids themselves have also been suggested to lie within Aves rather than just outside it.
The scientists who described the (apparently unfeathered) Juravenator performed a genealogical study of coelurosaurs, including the distribution of various feather types. Based on the placement of feathered species in relation to those that have not been found with any type of skin impressions, they were able to infer the presence of feathers in certain dinosaur groups. The following simplified cladogram follows these results, and shows the likely distribution of plumaceous (downy) and pennaceous (vaned) feathers among theropods. Note that the authors inferred pennaceous feathers for Velociraptor based on phylogenetic bracketing, a prediction later confirmed by fossil evidence.
- Origin of birds
- Evolution of birds
- Origin of avian flight
- Birds Came First
- Alan Feduccia
- George Olshevsky
- ^ All known dromaeosaurs have pennaceous feathers on the arms and tail, and a substantially thick coat of feathers on the body, especially the neck and breast. Clear fossil evidence of modern avian-style feathers exists for several related dromaeosaurids, including Velociraptor and Microraptor, though no direct evidence is yet known for Deinonychus itself.
- ^ On page 155 of Dinosaurs of the Air by Gregory Paul, there is an accumulated total of 305 potential synapomorphies with birds for all non-avian theropod nodes, and 347 for all non-avian dinosauromorph nodes.
Shared features between birds and dinosaurs include:
- A pubis (one of the three bones making up the vertebrate pelvis) shifted from an anterior to a more posterior orientation (see Saurischia), and bearing a small distal "boot".
- Elongated arms and forelimbs and clawed manus (hands).
- Large orbits (eye openings in the skull).
- Flexible wrist with a semi-lunate carpal (wrist bone).
- Double-condyled dorsal joint on the quadrate bone.
- Ossified uncinate processes of the ribs.
- Most of the sternum is ossified.
- Broad sternal plates.
- Ossified sternal ribs.
- Brain enlarged above reptilian maximum.
- Overlapping field of vision.
- Olfaction sense reduced.
- An arm/leg length ratio between 0.5 and 1.0
- Lateral exposure of the glenoid in the humeral joint.
- Hollow, thin-walled bones.
- 3-fingered opposable grasping manus (hand), 4-toed pes (foot); but supported by 3 main toes.
- Fused carpometacarpus.
- Metacarpal III bowed posterolaterally.
- Flexibility of digit III reduced.
- Digit III tightly appressed to digit II.
- Well developed arm folding mechanism.
- Reduced, posteriorly stiffened tail.
- Distal tail stiffened.
- Tail base hyperflexible, especially dorsally.
- Elongated metatarsals (bones of the feet between the ankle and toes).
- S-shaped curved neck.
- Erect, digitigrade (ankle held well off the ground) stance with feet positioned directly below the body.
- Similar eggshell microstructure.
- Teeth with a constriction between the root and the crown.
- Functional basis for wing power stroke present in arms and pectoral girdle (during motion, the arms were swung down and forward, then up and backwards, describing a "figure-eight" when viewed laterally).
- Expanded pneumatic sinuses in the skull.
- Five or more vertebrae incorporated into the sacrum (hip).
- Posterior caudal vertebrae fused to form the pygostyle.
- Large, strongly built, and straplike scapula (shoulder blade).
- Scapula blades are horizontal.
- Scapula tip is pointed.
- Acromion process is developed, similar to that in Archaeopteryx.
- Retroverted and long coracoids.
- Strongly flexed and subvertical coracoids relative to the scapula.
- Clavicles (collarbone) fused to form a furcula (wishbone).
- U-shaped furcula.
- Hingelike ankle joint, with movement mostly restricted to the fore-aft plane.
- Secondary bony palate (nostrils open posteriorly in throat).
- Pennaceous feathers in some taxa. Proto-feathers, filaments, and integumentary structures in others.
- Well-developed, symmetrical arm contour feathers.
- Source 1: Are Birds Really Dinosaurs? Dinobuzz, Current Topics Concerning Dinosaurs. Created 9/27/05. Accessed 7/20/09. Copyright 1994-2009 by the Regents of the University of California, all rights reserved.
- Source 2: Kurochkin, E., N. 2006. Parallel Evolution of Theropod Dinosaurs and Birds. Entomological Review 86 (1), pp. S45-S58. doi:10.1134/S0013873806100046
- Source 3: Paul, Gregory S. (2002). "11". Dinosaurs of the Air: The Evolution and Loss of Flight in Dinosaurs and Birds. Baltimore: Johns Hopkins University Press. pp. 225-227: Table 11.1. ISBN 978-0801867637.
- ^ Xu Xing suggested that the integumentary features present in some pterosaurs and the ornithischian dinosaur Psittacosaurus may be evidence of this first stage.
- ^ Examples in the fossil record may include Sinosauropteryx, Beipiaosaurus, Dilong, and Sinornithosaurus.
- ^ According to Xu Xing, stage III is supported by the fact that feather follicles developed after barb ridges, along with the follicle having a unique role in the formation of the rachis.
- ^ See Caudipteryx, Protarchaeopteryx, and Sinornithosaurus.
Xu Xing also noted that while the pennaceous feathers of Microraptor differ from those of Caudipteryx and Protarchaeopteryx due to the aerodynamic functions of its feathers, they still belong together in the same stage because they both "evolved form-stiffening barbules" on their feathers.
- ^ Remiges are the large feathers of the forelimbs (singular remex). The large feathers that grow from the tail are termed rectrices (singular rectrix).
- ^ Darwin, Charles R. (1859). On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life. London: John Murray. p. 502pp. http://darwin-online.org.uk/content/frameset?itemID=F373&viewtype=side&pageseq=16.
- ^ Huxley, Thomas H. (1870). "Further evidence of the affinity between the dinosaurian reptiles and birds". Quarterly Journal of the Geological Society of London 26: 12–31.
- ^ Huxley, Thomas H. (1868). "On the animals which are most nearly intermediate between birds and reptiles". Annals of the Magazine of Natural History 4 (2): 66–75.
- ^ Foster, Michael; Lankester, E. Ray 1898–1903. The scientific memoirs of Thomas Henry Huxley. 4 vols and supplement. London: Macmillan.
- ^ Owen, R. (1863): On the Archaeopteryx of von Meyer, with a description of the fossil remains of a long-tailed species, from the Lithographic Slate of Solenhofen. - Philosophical Transactions of the Royal Society of London, 1863: 33-47. London.
- ^ a b Padian K. and Chiappe LM (1998). The origin and early evolution of birds. Biological Reviews 73: 1-42.
- ^ a b c d e f g h i Xu Xing; Zhou Zhonghe; Wang Xiaolin; Kuang Xuewen; Zhang Fucheng; & Du Xiangke (2003). "Four-winged dinosaurs from China". Nature 421 (6921): 335–340. doi:10.1038/nature01342.
- ^ a b c d Zhang, F., Zhou, Z., Xu, X. & Wang, X. (2002). "A juvenile coelurosaurian theropod from China indicates arboreal habits." Naturwissenschaften, 89(9): 394-398. doi:10.1007 /s00114-002-0353-8.
- ^ Fox, W. (1866). Another new Wealden reptile. Athenaeum 2014, 740.
- ^ Naish, D. (2002). The historical taxonomy of the Lower Cretaceous theropods (Dinosauria) Calamospondylus and Aristosuchus from the Isle of Wight. Proceedings of the Geologists' Association 113, 153-163.
- ^ Swinton, W. E. (1936a). Notes on the osteology of Hypsilophodon, and on the family Hypsilophodontidae. Proceedings of the Zoological Society of London 1936, 555-578.
- ^ Swinton, W. E. (1936b). The dinosaurs of the Isle of Wight. Proceedings of the Geologists' Association 47, 204-220.
- ^ Galton, P. M. (1971a). Hypsilophodon, the cursorial non-arboreal dinosaur. Nature 231, 159-161.
- ^ Galton, P. M. (1971b). The mode of life of Hypsilophodon, the supposedly arboreal ornithopod dinosaur. Lethaia 4, 453-465.
- ^ a b Paul, G.S. (1988). Predatory Dinosaurs of the World. New York: Simon & Schuster.
- ^ a b Olshevsky, G. (2001a). The birds came first: a scenario for avian origins and early evolution, 1. Dino Press 4, 109-117.
- ^ a b Olshevsky, G. (2001b). The birds came first: a scenario for avian origins and early evolution. Dino Press 5, 106-112.
- ^ a b Ostrom, John H. (1969). "Osteology of Deinonychus antirrhopus, an unusual theropod from the Lower Cretaceous of Montana". Bulletin of the Peabody Museum of Natural History 30: 1–165.
- ^ Paul, Gregory S. (2000). "A Quick History of Dinosaur Art". in Paul, Gregory S. (ed.). The Scientific American Book of Dinosaurs. New York: St. Martin's Press. pp. 107–112. ISBN 0-312-26226-4.
- ^ El Pais: El 'escándalo archaeoraptor' José Luis Sanz y Francisco Ortega 16/02/2000 Online, Spanish
- ^ a b Swisher Iii, C.C.; Wang, Y.Q.; Wang, X.L.; Xu, X.; Wang, Y. (2001), "Cretaceous age for the feathered dinosaurs of Liaoning, China", Rise of the Dragon: Readings from Nature on the Chinese Fossil Record: 167, http://books.google.com/books?hl=en, retrieved on 2009-09-02
- ^ a b Swisher, C.C.; Xiaolin, W.; Zhonghe, Z.; Yuanqing, W.; Fan, J.I.N.; Jiangyong, Z.; Xing, X.U.; Fucheng, Z.; et al. (2002), "Further support for a Cretaceous age for the feathered-dinosaur beds of Liaoning, China …", Chinese Science Bulletin 47 (2): 136–139, http://www.springerlink.com/index/W7724740N2320M80.pdf, retrieved on 2009-09-02
- ^ Sereno, Paul C.; & Rao Chenggang (1992). "Early evolution of avian flight and perching: new evidence from the Lower Cretaceous of China". Science 255 (5046): 845–848. doi:10.1126/science.255.5046.845. PMID 17756432.
- ^ Hou Lian-Hai; Zhou Zhonghe; Martin, Larry D.; & Feduccia, Alan (1995). "A beaked bird from the Jurassic of China". Nature 377 (6550): 616–618. doi:10.1038/377616a0.
- ^ Novas, F. E., Puerta, P. F. (1997). New evidence concerning avian origins from the Late Cretaceous of Patagonia. Nature 387:390-392.
- ^ Norell, M. A., Clark, J. M., Makovivky, P. J. (2001). Phylogenetic relationships among coelurosaurian dinosaurs. In: Gauthier, J. A., Gall, L. F., eds. New Perspectives on the Origin and Early Evolution of Birds. Yale University Press, New Haven, pp. 49-67.
- ^ Gatesy, S. M., Dial, K. P. (1996). Locomotor modules and the evolution of avian flight. Evolution 50:331-340.
- ^ Gatesy, S. M. (2001). The evolutionary history of the theropod caudal locomotor module. In: Gauthier, J. A., Gall, L. F., eds. New Perspectives on the Origin and Early Evolution of Birds. Yale University Press, New Haven, pp. 333-350.
- ^ Xu, X. (2002). Deinonychosaurian fossils from the Jehol Group of western Liaoning and the coelurosaurian evolution (Dissertation). Chinese Academy of Sciences, Beijing.
- ^ a b c d e f g h i j k l m n o p q Xu Xing (2006). Feathered dinosaurs from China and the evolution of major avian characters. Integrative Zoology 1:4-11. doi:10.1111/j.1749-4877.2006.00004.x
- ^ a b Ji Qiang; & Ji Shu-an (1996). "On the discovery of the earliest bird fossil in China and the origin of birds". Chinese Geology 233: 30–33.
- ^ a b c d e f g h i Chen Pei-ji; Dong Zhiming; & Zhen Shuo-nan. (1998). "An exceptionally preserved theropod dinosaur from the Yixian Formation of China". Nature 391 (6663): 147–152. doi:10.1038/34356.
- ^ a b Lingham-Soliar, Theagarten; Feduccia, Alan; & Wang Xiaolin. (2007). "A new Chinese specimen indicates that ‘protofeathers’ in the Early Cretaceous theropod dinosaur Sinosauropteryx are degraded collagen fibres". Proceedings of the Royal Society B: Biological Sciences 274 (1620): 1823–1829. doi:10.1098/rspb.2007.0352.
- ^ a b c d e f Ji Qiang; Currie, Philip J.; Norell, Mark A.; & Ji Shu-an. (1998). "Two feathered dinosaurs from northeastern China". Nature 393 (6687): 753–761. doi:10.1038/31635.
- ^ Sloan, Christopher P. (1999). "Feathers for T. rex?". National Geographic 196 (5): 98–107.
- ^ Monastersky, Richard (2000). "All mixed up over birds and dinosaurs". Science News 157 (3): 38. doi:10.2307/4012298. http://www.sciencenews.org/view/generic/id/94/title/All_mixed_up_over_birds_and_dinosaurs.
- ^ a b c d e Xu Xing; Tang Zhi-lu; & Wang Xiaolin. (1999). "A therizinosaurid dinosaur with integumentary structures from China". Nature 399 (6734): 350–354. doi:10.1038/20670.
- ^ a b c d e f g Xu, X., Norell, M. A., Kuang, X., Wang, X., Zhao, Q., Jia, C. (2004). "Basal tyrannosauroids from China and evidence for protofeathers in tyrannosauroids". Nature 431: 680–684. doi:10.1038/nature02855.
- ^ Zhou Zhonghe; & Zhang Fucheng (2002). "A long-tailed, seed-eating bird from the Early Cretaceous of China". Nature 418 (6896): 405–409. doi:10.1038/nature00930.
- ^ Wellnhofer, P. (1988). Ein neuer Exemplar von Archaeopteryx. Archaeopteryx 6:1–30.
- ^ a b c Zhou Zhonghe; Barrett, Paul M.; & Hilton, Jason. (2003). "An exceptionally preserved Lower Cretaceous ecosystem". Nature 421 (6925): 807–814. doi:10.1038/nature01420.
- ^ a b c d Feduccia, A., Lingham-Soliar, T. & Hinchliffe, J. R. (2005). Do feathered dinosaurs exist? Testing the hypothesis on neontological and paleontological evidence. Journal of Morphology 266, 125-166. doi:10.1002/jmor.10382
- ^ a b c Czerkas, S.A., Zhang, D., Li, J., and Li, Y. (2002). "Flying Dromaeosaurs". in Czerkas, S.J.. Feathered Dinosaurs and the Origin of Flight: The Dinosaur Museum Journal 1. Blanding: The Dinosaur Museum. pp. 16–26.
- ^ a b Norell, Mark, Ji, Qiang, Gao, Keqin, Yuan, Chongxi, Zhao, Yibin, Wang, Lixia. (2002). "'Modern' feathers on a non-avian dinosaur". Nature, 416: pp. 36. 7 March 2002.
- ^ a b c d e f g h Paul, Gregory S. (2002). Dinosaurs of the Air: The Evolution and Loss of Flight in Dinosaurs and Birds. Baltimore: Johns Hopkins University Press. ISBN 978-0801867637.
- ^ Heilmann, G. (1926): The Origin of Birds. Witherby, London. ISBN 0-486-22784-7 (1972 Dover reprint)
- ^ John Ostrom (1975). The origin of birds. Annual Review of Earth and Planetary Sciences 3, pp. 55.
- ^ Bryant, H.N. & Russell, A.P. (1993) The occurrence of clavicles within Dinosauria: implications for the homology of the avian furcula and the utility of negative evidence. Journal of Vertebrate Paleontology, 13(2):171-184.
- ^ Chure, Daniel J.; & Madsen, James H. (1996). "On the presence of furculae in some non-maniraptoran theropods". Journal of Vertebrate Paleontology 16 (3): 573–577.
- ^ Norell, Mark A.; & Makovicky, Peter J. (1999). "Important features of the dromaeosaurid skeleton II: Information from newly collected specimens of Velociraptor mongoliensis". American Museum Novitates 3282: 1–44. http://hdl.handle.net/2246/3025.
- ^ Colbert, E. H. & Morales, M. (1991) Evolution of the vertebrates: a history of the backboned animals through time. 4th ed. Wiley-Liss, New York. 470 p.
- ^ Barsbold, R. et al. (1990) Oviraptorosauria. In The Dinosauria, Weishampel, Dodson & Osmolska (eds) pp 249-258.
- ^ Included as a cladistic definer, e.g. (Columbia University) Master Cladograms or mentioned even in the broadest context, such as Paul C. Sereno, "The origin and evolution of dinosaurs" Annual Review of Earth and Planetary Sciences 25 pp 435-489.
- ^ Lipkin, C., Sereno, P.C., and Horner, J.R. (November 2007). "THE FURCULA IN SUCHOMIMUS TENERENSIS AND TYRANNOSAURUS REX (DINOSAURIA: THEROPODA: TETANURAE)". Journal of Paleontology 81 (6): 1523–1527. doi:10.1666/06-024.1. http://jpaleontol.geoscienceworld.org/cgi/content/extract/81/6/1523. - full text currently online at "The Furcula in Suchomimus Tenerensis and Tyrannosaurus rex". http://www.redorbit.com/news/health/1139122/the_furcula_in_suchomimus_tenerensis_and_tyrannosaurus_rex_dinosauria_theropoda/index.html. This lists a large number of theropods in which furculae have been found, as well as describing those of Suchomimus Tenerensis and Tyrannosaurus rex.
- ^ Carrano, M,R., Hutchinson, J.R., and Sampson, S.D. (December 2005). "New information on Segisaurus halli, a small theropod dinosaur from the Early Jurassic of Arizona". Journal of Vertebrate Paleontology 25 (4): 835–849. doi:10.1671/0272-4634(2005)025[0835:NIOSHA]2.0.CO;2. http://www.rvc.ac.uk/AboutUs/Staff/jhutchinson/documents/JH18.pdf.
- ^ Yates, Adam M.; and Vasconcelos, Cecilio C. (2005). "Furcula-like clavicles in the prosauropod dinosaur Massospondylus". Journal of Vertebrate Paleontology 25 (2): 466–468. doi:10.1671/0272-4634(2005)025[0466:FCITPD]2.0.CO;2.
- ^ Downs, A. (2000). Coelophysis bauri and Syntarsus rhodesiensis compared, with comments on the preparation and preservation of fossils from the Ghost Ranch Coelophysis Quarry. New Mexico Museum of Natural History and Science Bulletin, vol. 17, pp. 33–37.
- ^ The furcula of Coelophysis bauri, a Late Triassic (Apachean) dinosaur (Theropoda: Ceratosauria) from New Mexico. 2006. By Larry Rinehart, Spencer Lucas, and Adrian Hunt
- ^ a b Ronald S. Tykoski, Catherine A. Forster, Timothy Rowe, Scott D. Sampson, and Darlington Munyikwad. (2002). A furcula in the coelophysid theropod Syntarsus. Journal of Vertebrate Paleontology 22(3):728-733.
- ^ Larry F. Rinehart, Spencer G. Lucas, Adrian P. Hunt. (2007). Furculae in the Late Triassic theropod dinosaur Coelophysis bauri. Paläontologische Zeitschrift 81: 2
- ^ a b Sereno, P.C.; Martinez, R.N.; Wilson, J.A.; Varricchio, D.J.; Alcober, O.A.; and Larsson, H.C.E. (September 2008). "Evidence for Avian Intrathoracic Air Sacs in a New Predatory Dinosaur from Argentina". PLoS ONE 3 (9): e3303. doi:10.1371/journal.pone.0003303. http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0003303. Retrieved on 2008-10-27.
- ^ O'Connor, P.M. & Claessens, L.P.A.M. (2005). "Basic avian pulmonary design and flow-through ventilation in non-avian theropod dinosaurs". Nature 436: 253–256. doi:10.1038/nature03716.
- ^ Meat-Eating Dinosaur from Argentina Had Bird-Like Breathing System Newswise, Retrieved on September 29, 2008.
- ^ Fisher, P. E., Russell, D. A., Stoskopf, M. K., Barrick, R. E., Hammer, M. & Kuzmitz, A. A. (2000). Cardiovascular evidence for an intermediate or higher metabolic rate in an ornithischian dinosaur. Science 288, 503–505.
- ^ Hillenius, W. J. & Ruben, J. A. (2004). The evolution of endothermy in terrestrial vertebrates: Who? when? why? Physiological and Biochemical Zoology 77, 1019–1042.
- ^ Dinosaur with a Heart of Stone. T. Rowe, E. F. McBride, P. C. Sereno, D. A. Russell, P. E. Fisher, R. E. Barrick, and M. K. Stoskopf (2001) Science 291, 783
- ^ a b Xu, X. and Norell, M.A. (2004). A new troodontid dinosaur from China with avian-like sleeping posture. Nature 431:838-841. See commentary on the article.
- ^ Schweitzer, M.H.; Wittmeyer, J.L.; and Horner, J.R. (2005). "Gender-specific reproductive tissue in ratites and Tyrannosaurus rex". Science 308: 1456–1460. doi:10.1126/science.1112158. PMID 15933198. http://www.sciencemag.org/cgi/content/abstract/308/5727/1456.
- ^ Lee, Andrew H.; and Werning, Sarah (2008). "Sexual maturity in growing dinosaurs does not fit reptilian growth models". Proceedings of the National Academy of Sciences 105 (2): 582–587. doi:10.1073/pnas.0708903105. PMID 18195356. http://www.pnas.org/cgi/content/abstract/105/2/582.
- ^ Chinsamy, A., Hillenius, W.J. (2004). Physiology of nonavian dinosaurs. In: Weishampel, D.B., Dodson, P., Osmolska, H., eds. The Dinosauria. University of California Press, Berkeley. pp. 643-65.
- ^ Norell, M.A., Clark, J.M., Chiappe, L.M., and Dashzeveg, D. (1995). "A nesting dinosaur." Nature 378:774-776.
- ^ a b Clark, J.M., Norell, M.A., & Chiappe, L.M. (1999). "An oviraptorid skeleton from the Late Cretaceous of Ukhaa Tolgod, Mongolia, preserved in an avianlike brooding position over an oviraptorid nest." American Museum Novitates, 3265: 36 pp., 15 figs.; (American Museum of Natural History) New York. (5.4.1999).
- ^ Norell, M. A., Clark, J. M., Dashzeveg, D., Barsbold, T., Chiappe, L. M., Davidson, A. R., McKenna, M. C. and Novacek, M. J. (November 1994). "A theropod dinosaur embryo and the affinities of the Flaming Cliffs Dinosaur eggs" (abstract page). Science 266 (5186): 779–782. doi:10.1126/science.266.5186.779. PMID 17730398. http://www.sciencemag.org/cgi/content/abstract/266/5186/779.
- ^ Oviraptor nesting Oviraptor nests or Protoceratops?
- ^ Gregory Paul (1994). Thermal environments of dinosaur nestlings: Implications for endothermy and insulation. In: Dinosaur Eggs and Babies.
- ^ Homberger, D.G. (2002). The aerodynamically streamlined body shape of birds: Implications for the evolution of birds, feathers, and avian flight. In: Zhou, Z., Zhang, F., eds. Proceedings of the 5th symposium of the Society of Avian Paleontology and Evolution, Beijing, 1-4 June 2000. Beijing, China: Science Press. p. 227-252.
- ^ a b c Ji, Q., and Ji, S. (1997). "A Chinese archaeopterygian, Protarchaeopteryx gen. nov." Geological Science and Technology (Di Zhi Ke Ji), 238: 38-41. Translated By Will Downs Bilby Research Center Northern Arizona University January, 2001
- ^ a b c d e Xu, X., Zhou, Z., and Wang, X. (2000). "The smallest known non-avian theropod dinosaur." Nature, 408 (December): 705-708.
- ^ Dal Sasso, C. and Signore, M. (1998). Exceptional soft-tissue preservation in a theropod dinosaur from Italy. Nature 392:383–387. See commentary on the article
- ^ Mary H. Schweitzer, Jennifer L. Wittmeyer, John R. Horner, and Jan K. Toporski (2005). Science 307 (5717) pp. 1952-1955. doi:10.1126/science.1108397
- ^ Schweitzer, M.H., Wittmeyer, J.L. and Horner, J.R. (2005). Soft-Tissue Vessels and Cellular Preservation in Tyrannosaurus rex. Science 307:1952–1955. See commentary on the article
- ^ Wang, H., Yan, Z. and Jin, D. (1997). Reanalysis of published DNA sequence amplified from Cretaceous dinosaur egg fossil. Molecular Biology and Evolution. 14:589–591. See commentary on the article.
- ^ Chang, B.S.W., Jönsson, K., Kazmi, M.A., Donoghue, M.J. and Sakmar, T.P. (2002). Recreating a Functional Ancestral Archosaur Visual Pigment. Molecular Biology and Evolution 19:1483–1489. See commentary on the article.
- ^ Embery, et al. "Identification of proteinaceous material in the bone of the dinosaur Iguanodon." Connect Tissue Res. 2003; 44 Suppl 1:41-6. PMID: 12952172
- ^ Schweitzer, et al. (1997 Jun 10) "Heme compounds in dinosaur trabecular bone." Proc Natl Acad Sci U S A.. 94(12):6291–6. PMID: 9177210
- ^ Fucheng, Z., Zhonghe, Z., and Dyke, G. (2006). Feathers and 'feather-like' integumentary structures in Liaoning birds and dinosaurs. Geol . J. 41:395-404.
- ^ a b Cheng-Ming Chuong, Ping Wu, Fu-Cheng Zhang, Xing Xu, Minke Yu, Randall B. Widelitz, Ting-Xin Jiang, and Lianhai Hou (2003). Adaptation to the sky: defining the feather with integument fossils from the Mesozoic China and experimental evidence from molecular laboratories. Journal of Experimental Zoology (MOL DEV EVOL) 298b:42-56.
- ^ Bakker, R.T., Galton, P.M. (1974). Dinosaur monophyly and a new class of vertebrates. Nature 248:168-172.
- ^ Sumida, SS & CA Brochu (2000). "Phylogenetic context for the origin of feathers". American Zoologist 40 (4): 486–503. doi:10.1093/icb/40.4.486. http://icb.oxfordjournals.org/cgi/content/abstract/40/4/486.
- ^ a b c d Chiappe, Luis M. (2009). Downsized Dinosaurs: The Evolutionary Transition to Modern Birds. Evo Edu Outreach 2: 248-256. doi:10.1007/s12052-009-0133-4
- ^ Burgers, P., Chiappe, L.M. (1999). The wing of Archaeopteryx as a primary thrust generator. Nature 399: 60-2. doi:10.1038/19967
- ^ a b c d e f g h Prum, R. & Brush A.H. (2002). "The evolutionary origin and diversification of feathers". The Quarterly Review of Biology 77: 261–295. doi:10.1086/341993.
- ^ a b c d Prum, R. H. (1999). Development and evolutionary origin of feathers. Journal of Experimental Zoology 285, 291-306.
- ^ Griffiths, P. J. (2000). The evolution of feathers from dinosaur hair. Gaia 15, 399-403.
- ^ a b c d e f g Mayr, G. Peters, S.D. Plodowski, G. Vogel, O. (2002). "Bristle-like integumentary structures at the tail of the horned dinosaur Psittacosaurus". Naturwissenschaften 89: 361–365. doi:10.1007/s00114-002-0339-6.
- ^ a b c Schweitzer, Mary Higby, Watt, J.A., Avci, R., Knapp, L., Chiappe, L, Norell, Mark A., Marshall, M. (1999). "Beta-Keratin Specific Immunological reactivity in Feather-Like Structures of the Cretaceous Alvarezsaurid, Shuvuuia deserti." Journal of Experimental Zoology Part B (Mol Dev Evol) 285:146-157
- ^ Schweitzer, M. H. (2001). Evolutionary implications of possible protofeather structures associated with a specimen of Shuvuuia deserti. In Gauthier, J. & Gall, L. F. (eds) New prespectives on the origin and early evolution of birds: proceedings of the international symposium in honor of John H. Ostrom. Peabody Museum of Natural History, Yale University (New Haven), pp. 181-192.
- ^ Christiansen, P. & Bonde, N. (2004). Body plumage in Archaeopteryx: a review, and new evidence from the Berlin specimen. C. R. Palevol 3, 99-118.
- ^ M.J. Benton, M.A. Wills, R. Hitchin. (2000). Quality of the fossil record through time. Nature 403, 534-537. doi:10.1038/35000558
- ^ Morgan, James (2008-10-22). "New feathered dinosaur discovered". BBC. http://news.bbc.co.uk/2/hi/science/nature/7684796.stm. Retrieved on 2009-07-02.
- ^ a b c d e f Zhang, F., Zhou, Z., Xu, X., Wang, X., & Sullivan, C. (2008). "A bizarre Jurassic maniraptoran from China with elongate ribbon-like feathers." Available from Nature Precedings, doi:10.1038/npre.2008.2326.1 .
- ^ Prum, R. O. & Brush, A. H. (2003). Which came first, the feather or the bird? Scientific American 286 (3), 84-93.
- ^ Epidexipteryx: bizarre little strap-feathered maniraptoran ScienceBlogs Tetrapod Zoology article by Darren Naish. October 23, 2008
- ^ Gishlick, A. D. (2001). The function of the manus and forelimb of Deinonychus antirrhopus and its importance for the origin of avian flight. In Gauthier, J. & Gall, L. F. (eds) New Perspectives on the Origin and Early Evolution of Birds: Proceedings of the International Symposium in Honor of John H. Ostrom. Peabody Museum of Natural History, Yale University (New Haven), pp. 301-318.
- ^ Senter, P. (2006). Comparison of forelimb function between Deinonychus and Bambiraptor (Theropoda: Dromaeosauridae). Journal of Vertebrate Paleontology 26, 897-906.
- ^ JA Long, P Schouten. (2008). Feathered Dinosaurs: The Origin of Birds
- ^ a b Yalden, D. W. (1985). Forelimb function in Archaeopteryx. In Hecht, M. K., Ostrom, J. H., Viohl, G. & Wellnhofer, P. (eds) The Beginnings of Birds - Proceedings of the International Archaeopteryx Conference, Eichstatt 1984, pp. 91-97.
- ^ Chen, P.-J., Dong, Z.-M. & Zhen, S.-N. (1998). An exceptionally well-preserved theropod dinosaur from the Yixian Formation of China. Nature 391, 147-152.
- ^ a b c Currie, Philip J.; Pei-ji Chen. (2001). Anatomy of Sinosauropteryx prima from Liaoning, northeastern China. Canadian Journal of Earth Sciences 38, 1705-1727. doi:10.1139/cjes-38-12-1705
- ^ Bohlin, B. 1947. The wing of Archaeornithes. Zoologiska Bidrag 25, 328-334.
- ^ Rietschel, S. (1985). Feathers and wings of Archaeopteryx, and the question of her flight ability. In Hecht, M. K., Ostrom, J. H., Viohl, G. & Wellnhofer, P. (eds) The Beginnings of Birds - Proceedings of the International Archaeopteryx Conference, Eichstatt 1984, pp. 251-265.
- ^ a b Griffiths, P. J. 1993. The claws and digits of Archaeopteryx lithographica. Geobios 16, 101-106.
- ^ Stephan, B. 1994. The orientation of digital claws in birds. Journal fur Ornithologie 135, 1-16.
- ^ a b c Chiappe, L.M. and Witmer, L.M. (2002). Mesozoic Birds: Above the Heads of Dinosaurs. Berkeley: University of California Press, ISBN 0520200942
- ^ Martin, L. D. & Lim, J.-D. (2002). Soft body impression of the hand in Archaeopteryx. Current Science 89, 1089-1090.
- ^ a b c d Feduccia, A. (1999). The Origin and Evolution of Birds. 420 pp. Yale University Press, New Haven. ISBN 0300078617.
- ^ a b Dyke, G.J., and Norell, M.A. (2005). "Caudipteryx as a non-avialan theropod rather than a flightless bird." Acta Palaeontologica Polonica, 50(1): 101–116. PDF fulltext
- ^ a b c Witmer, L.M. (2002). “The Debate on Avian Ancestry; Phylogeny, Function and Fossils”, Mesozoic Birds: Above the Heads of Dinosaurs : 3–30. ISBN 0-520-20094-2
- ^ Jones T.D., Ruben J.A., Martin L.D., Kurochkin E.N., Feduccia A., Maderson P.F.A., Hillenius W.J., Geist N.R., Alifanov V. (2000). Nonavian feathers in a Late Triassic archosaur. Science 288: 2202-2205.
- ^ Martin, Larry D. (2006). "A basal archosaurian origin for birds". Acta Zoologica Sinica 50 (6): 977–990.
- ^ Burke, Ann C.; & Feduccia, Alan. (1997). "Developmental patterns and the identification of homologies in the avian hand". Science 278 (5338): 666–668. doi:10.1126/science.278.5338.666.
- ^ a b Kevin Padian (2000). Dinosaurs and Birds — an Update. Reports of the National Center for Science Education. 20 (5):28–31.
- ^ Ostrom J.H. (1973). The ancestry of birds. Nature 242: 136.
- ^ a b Padian, Kevin. (2004). "Basal Avialae". in Weishampel, David B.; Dodson, Peter; & Osmólska, Halszka (eds.). The Dinosauria (Second ed.). Berkeley: University of California Press. pp. 210–231. ISBN 0-520-24209-2.
- ^ Olshevsky, G. (1991). A Revision of the Parainfraclass Archosauria Cope, 1869, Excluding the Advanced Crocodylia. Publications Requiring Research, San Diego.
- ^ Olshevsky, G. (1994). The birds first? A theory to fit the facts. Omni 16 (9), 34-86.
- ^ a b Chatterjee, S. (1999): Protoavis and the early evolution of birds. Palaeontographica A 254: 1-100.
- ^ Chatterjee, S. (1995): The Triassic bird Protoavis. Archaeopteryx 13: 15-31.
- ^ Chatterjee, S. (1998): The avian status of Protoavis. Archaeopteryx 16: 99-122.
- ^ Chatterjee, S. (1991). "Cranial anatomy and relationships of a new Triassic bird from Texas." Philosophical Transactions of the Royal Society B: Biological Sciences, 332: 277-342. HTML abstract
- ^ Paul, G.S. (2002). Dinosaurs of the Air: The Evolution and Loss of Flight in Dinosaurs and Birds. Johns Hopkins University Press, Baltimore. ISBN 0-8018-6763-0
- ^ Witmer, L. (2002). "The debate on avian ancestry: phylogeny, function, and fossils." Pp. 3-30 in: Chiappe, L.M. and Witmer, L.M. (eds), Mesozoic birds: Above the heads of dinosaurs. University of California Press, Berkeley, Calif., USA. ISBN 0-520-20094-2
- ^ Nesbitt, Sterling J.; Irmis, Randall B. & Parker, William G. (2007): A critical re-evaluation of the Late Triassic dinosaur taxa of North America. Journal of Systematic Palaeontology 5(2): 209-243.
- ^ Ostrom, J. (1987): Protoavis, a Triassic bird? Archaeopteryx 5: 113-114.
- ^ Ostrom, J.H. (1991): The bird in the bush. Nature 353(6341): 212.
- ^ Ostrom, J.H. (1996): The questionable validity of Protoavis. Archaeopteryx 14: 39-42.
- ^ Chatterjee, S. (1987). "Skull of Protoavis and Early Evolution of Birds." Journal of Vertebrate Paleontology, 7(3)(Suppl.): 14A.
- ^ a b EvoWiki (2004). Chatterjee's Chimera: A Cold Look at the Protoavis Controversy. Version of 2007-JAN-22. Retrieved 2009-FEB-04.
- ^ Chatterjee, S. (1997). The Rise of Birds: 225 Million Years of Evolution. Johns Hopkins University Press, Baltimore. ISBN 0-8018-5615-9
- ^ Feduccia, Alan (1994) "The Great Dinosaur Debate" Living Bird. 13:29-33.
- ^ Why Birds Aren't Dinosaurs. Explore:Thought and Discovery at the University of Kansas. Accessed 8/05/09.
- ^ Jensen, James A. & Padian, Kevin. (1989) "Small pterosaurs and dinosaurs from the Uncompahgre fauna (Brushy Basin member, Morrison Formation: ?Tithonian), Late Jurassic, western Colorado" Journal of Paleontology Vol. 63 no. 3 pg. 364-373.
- ^ Lubbe, T. van der, Richter, U., and Knötschke, N. 2009. Velociraptorine dromaeosaurid teeth from the Kimmeridgian (Late Jurassic) of Germany. Acta Palaeontologica Polonica 54 (3): 401–408. DOI: 10.4202/app.2008.0007.
- ^ a b c d Hartman, S., Lovelace, D., and Wahl, W., (2005). "Phylogenetic assessment of a maniraptoran from the Morrison Formation." Journal of Vertebrate Paleontology, 25, Supplement to No. 3, pp 67A-68A http://www.bhbfonline.org/AboutUs/Lori.pdf
- ^ Brochu, Christopher A. Norell, Mark A. (2001) "Time and trees: A quantitative assessment of temporal congruence in the bird origins debate" pp.511-535 in "New Perspectives on the Origin and Early Evolution of Birds" Gauthier&Gall, ed. Yale Peabody Museum. New Haven, Conn. USA.
- ^ a b Ruben, J., Jones, T. D., Geist, N. R. & Hillenius, W. J. (1997). Lung structure and ventilation in theropod dinosaurs and early birds. Science 278, 1267-1270.
- ^ a b Ruben, J., Dal Sasso, C., Geist, N. R., Hillenius, W. J., Jones, T. D. & Signore, M. (1999). Pulmonary function and metabolic physiology of theropod dinosaurs. Science 283, 514-516.
- ^ Quick, D. E. & Ruben, J. A. (2009). Cardio-pulmonary anatomy in theropod dinosaurs: implications from extant archosaurs. Journal of Morphology doi: 10.1002/jmor.10752
- ^ gazettetimes.com article
- ^ Discovery Raises New Doubts About Dinosaur-bird Links ScienceDaily article
- ^ Ruben, J., Hillenius, W., Geist, N. R., Leitch, A., Jones, T. D., Currie, P. J., Horner, J. R. & Espe, G. (1996). The metabolic status of some Late Cretaceous dinosaurs. Science 273, 1204-1207.
- ^ Theagarten Lingham-Soliar (2003). The dinosaurian origin of feathers: perspectives from dolphin (Cetacea) collagen fibers. Naturwissenschaften 90 (12): 563-567.
- ^ Peter Wellnhofer (2004) "Feathered Dragons: Studies on the Transition from Dinosaurs to Birds. Chapter 13. The Plumage of Archaeopteryx:Feathers of a Dinosaur?" Currie, Koppelhaus, Shugar, Wright. Indiana University Press. Bloomington, IN. USA. pp. 282-300.
- ^ Lingham-Soliar, T et al. (2007) Proc. R. Soc. Lond. B doi:10.1098/rspb.2007.0352.
- ^ "Bald dino casts doubt on feather theory". Nature News.
- ^ "Transcript: The Dinosaur that Fooled the World". BBC. http://www.bbc.co.uk/science/horizon/2001/dinofooltrans.shtml. Retrieved on 2006-12-22.
- ^ Mayell, Hillary (2002-11-20). "Dino Hoax Was Mainly Made of Ancient Bird, Study Says". National Geographic. http://news.nationalgeographic.com/news/2002/11/1120_021120_raptor.html. Retrieved on 2008-06-13.
- ^ Zhou, Zhonghe, Clarke, Julia A., Zhang, Fucheng. "Archaeoraptor's better half." Nature Vol. 420. 21 November 2002. pp. 285.
- ^ a b Makovicky, Peter J.; Apesteguía, Sebastián; & Agnolín, Federico L. (2005). "The earliest dromaeosaurid theropod from South America". Nature 437 (7061): 1007–1011. doi:10.1038/nature03996.
- ^ Norell, M.A., Clark, J.M., Turner, A.H., Makovicky, P.J., Barsbold, R., and Rowe, T. (2006). "A new dromaeosaurid theropod from Ukhaa Tolgod (Omnogov, Mongolia)." American Museum Novitates, 3545: 1-51.
- ^ a b Forster, Catherine A.; Sampson, Scott D.; Chiappe, Luis M. & Krause, David W. (1998). "The Theropod Ancestry of Birds: New Evidence from the Late Cretaceous of Madagascar". Science (5358): pp. 1915–1919. doi:10.1126/science.279.5358.1915. (HTML abstract).
- ^ a b Chiappe, L.M.. Glorified Dinosaurs: The Origin and Early Evolution of Birds. Sydney: UNSW Press.
- ^ a b Kurochkin, E., N. (2006). Parallel Evolution of Theropod Dinosaurs and Birds. Entomological Review 86 (1), pp. S45-S58. doi:10.1134/S0013873806100046
- ^ Kurochkin, E., N. (2004). A Four-Winged Dinosaur and the Origin of Birds. Priroda 5, 3-12.
- ^ a b c S. Chatterjee. (2005). The Feathered Dinosaur Microraptor:Its Biplane Wing Platform and Flight Performance. 2005 Salt Lake City Annual Meeting.
- ^ a b c d Chatterjee, S., and Templin, R.J. (2007). "Biplane wing platform and flight performance of the feathered dinosaur Microraptor gui." Proceedings of the National Academy of Sciences, 104(5): 1576-1580.
- ^ a b c Holtz, Thomas R.; & Osmólska, Halszka. (2004). "Saurischia". in Weishampel, David B.; Dodson, Peter; & Osmólska, Halszka (eds.). The Dinosauria (Second ed.). Berkeley: University of California Press. pp. 21–24. ISBN 0-520-24209-2.
- ^ a b c d e Xu, Xing; Zheng Xiao-ting; You, Hai-lu (20 January 2009). "A new feather type in a nonavian theropod and the early evolution of feathers". Proceedings of the National Academy of Sciences (Philadelphia). doi:10.1073/pnas.0810055106. PMID 19139401.
- ^ a b c Turner, Alan H.; Hwang, Sunny; & Norell, Mark A. (2007). "A small derived theropod from Öösh, Early Cretaceous, Baykhangor, Mongolia". American Museum Novitates 3557 (3557): 1–27. doi:10.1206/0003-0082(2007)3557[1:ASDTFS]2.0.CO;2. http://hdl.handle.net/2246/5845.
- ^ a b Bryner, Jeanna (2009). "Ancient Dinosaur Wore Primitive Down Coat." http://www.foxnews.com/story/0,2933,479875,00.html
- ^ a b Xu, X., Cheng, C., Wang, X. & Chang, C. (2003). Pygostyle-like structure from Beipiaosaurus (Theropoda, Therizinosauroidea) from the Lower Cretaceous Yixian Formation of Liaoning, China. Acta Geologica Sinica 77, 294-298.
- ^ a b c Xu Xing; Zhou Zhonghe & Prum, Richard A. (2001). "Branched integumental structures in Sinornithosaurus and the origin of feathers". Nature 410 (6825): 200–204. doi:10.1038/35065589.
- ^ Paul, Gregory S. (2008). "The extreme lifestyles and habits of the gigantic tyrannosaurid superpredators of the Late Cretaceous of North America and Asia". in Carpenter, Kenneth; and Larson, Peter E. (editors). Tyrannosaurus rex, the Tyrant King (Life of the Past). Bloomington: Indiana University Press. p. 316. ISBN 0-253-35087-5.
- ^ Martin, Larry D.; & Czerkas, Stephan A. (2000). "The fossil record of feather evolution in the Mesozoic". American Zoologist 40 (4): 687–694. doi:10.1668/0003-1569(2000)040[0687:TFROFE]2.0.CO;2. http://www.bioone.org/perlserv/?request=get-abstract&doi=10.1668%2F0003-1569%282000%29040%5B0687%3ATFROFE%5D2.0.CO%3B2.
- ^ a b T. rex was fierce, yes, but feathered, too.
- ^ Nicholas M. Gardner, David B. Baum, Susan Offner. (2008). No Direct Evidence for Feathers in Tyrannosaurus rex. The American Biology Teacher 70(7):392-392
- ^ a b Xu, X., Wang, X.-L., and Wu, X.-C. (1999). "A dromaeosaurid dinosaur with a filamentous integument from the Yixian Formation of China". Nature 401: 262–266. doi:10.1038/45769.
- ^ a b c d e f g h i j k Turner, A.H.; Makovicky, P.J.; and Norell, M.A. (2007). "Feather quill knobs in the dinosaur Velociraptor" (pdf). Science 317 (5845): 1721. doi:10.1126/science.1145076. PMID 17885130. http://www.sciencemag.org/cgi/reprint/317/5845/1721.pdf.
- ^ a b Ji, Q., Norell, M. A., Gao, K.-Q., Ji, S.-A. & Ren, D. (2001). The distribution of integumentary structures in a feathered dinosaur. Nature 410, 1084-1088.
- ^ a b American Museum of Natural History. "Velociraptor Had Feathers." ScienceDaily 20 September 2007. 23 January 2008 http://www.sciencedaily.com/releases/2007/09/070920145402.htm
- ^ a b c d e f g h Xu, X., and Zhang, F. (2005). "A new maniraptoran dinosaur from China with long feathers on the metatarsus." Naturwissenschaften, 92(4): 173 - 177.
- ^ a b c d Xu, X., Zhao, Q., Norell, M., Sullivan, C., Hone, D., Erickson, G., Wang, X., Han, F. and Guo, Y. (2009). "A new feathered maniraptoran dinosaur fossil that fills a morphological gap in avian origin." Chinese Science Bulletin, 6 pages, accepted November 15, 2008.
- ^ Currie, PJ & Chen, PJ (2001) Anatomy of Sinosauropteryx prima from Liaoning, northeastern China, Canadian Journal of Earth Sciences, 38: 1,705-1,727.
- ^ Buffetaut, E., Grellet-Tinner, G., Suteethorn, V., Cuny, G., Tong, H., Košir, A., Cavin, L., Chitsing, S., Griffiths, P.J., Tabouelle, J. and Le Loeuff, J. (2005). "Minute theropod eggs and embryo from the Lower Cretaceous of Thailand and the dinosaur-bird transition." Naturwissenschaften, 92(10): 477-482.
- ^ a b c Czerkas, S.A., and Yuan, C. (2002). "An arboreal maniraptoran from northeast China." Pp. 63-95 in Czerkas, S.J. (Ed.), Feathered Dinosaurs and the Origin of Flight. The Dinosaur Museum Journal 1. The Dinosaur Museum, Blanding, U.S.A. PDF abridged version
- ^ Maryanska, T., Osmolska, H., & Wolsam, M. (2002). "Avialian status for Oviraptorosauria". Acta Palaeontologica Polonica 47 (1): 97–116.
- ^ Benton, M. J. (2004). Vertebrate Palaeontology, 3rd ed. Blackwell Science Ltd.
- ^ a b Turner, Alan H.; Pol, Diego; Clarke, Julia A.; Erickson, Gregory M.; and Norell, Mark (2007). "A basal dromaeosaurid and size evolution preceding avian flight" (pdf). Science 317: 1378–1381. doi:10.1126/science.1144066. PMID 17823350. http://www.sciencemag.org/cgi/reprint/317/5843/1378.pdf.
- ^ a b c d Barsbold, R., Osmólska, H., Watabe, M., Currie, P.J., and Tsogtbaatar, K. (2000). "New Oviraptorosaur (Dinosauria, Theropoda) From Mongolia: The First Dinosaur With A Pygostyle". Acta Palaeontologica Polonica, 45(2): 97-106.
- ^ C.M. Chuong, R. Chodankar, R.B. Widelitz (2000). Evo-Devo of feathers and scales: building complex epithelial appendages. Commentary, Current Opinion in Genetics & Development 10 (4), pp. 449-456.
- ^ a b Kurzanov, S.M. (1987). "Avimimidae and the problem of the origin of birds." Transactions of the Joint Soviet-Mongolian Paleontological Expedition, 31: 5-92. [in Russian]
- ^ a b Hopp, Thomas J., Orsen, Mark J. (2004) "Feathered Dragons: Studies on the Transition from Dinosaurs to Birds. Chapter 11. Dinosaur Brooding Behavior and the Origin of Flight Feathers" Currie, Koppelhaus, Shugar, Wright. Indiana University Press. Bloomington, IN. USA.
- ^ Maryańska, T. & Osmólska, H. (1997). The Quadrate of Oviraptorid Dinosaurs. Acta Palaeontologica Polonica 42 (3): 361-371.
- ^ Jones, T.D., Farlow, J.O., Ruben, J.A., Henderson, D.M., and Hillenius, W.J. (2000). "Cursoriality in bipedal archosaurs." Nature, 406(6797): 716–718. doi:10.1038/35021041 PDF fulltext Supplementary information
- ^ Zhou, Z., Wang, X., Zhang, F., and Xu, X. (2000). "Important features of Caudipteryx - Evidence from two nearly complete new specimens." Vertebrata Palasiatica, 38(4): 241–254. PDF fulltext
- ^ Buchholz, P. (1997). Pelecanimimus polyodon. Dinosaur Discoveries 3, 3-4.
- ^ Briggs, D. E., Wilby, P. R., Perez-Moreno, B., Sanz, J. L. & Fregenal-Martinez, M. (1997). The mineralization of dinosaur soft tissue in the Lower Cretaceous of Las Hoyas, Spain. Journal of the Geological Society, London 154, 587-588.
- ^ a b Theagarten Lingham-Soliar. (2008). A unique cross section through the skin of the dinosaur Psittacosaurus from China showing a complex fibre architecture. Proc R Soc B 275: 775-780.
- ^ a b c d e f g h i j k Zheng, X.-T., You, H.-L., Xu, X. and Dong, Z.-M. (2009). "An Early Cretaceous heterodontosaurid dinosaur with filamentous integumentary structures." Nature, 458(19): 333-336. doi:10.1038/nature07856
- ^ Witmer, L.M. (2009), "Dinosaurs: Fuzzy origins for feathers", Nature 458 (7236): 293–295, http://www.nature.com/nature/journal/v458/n7236/full/458293a.html, retrieved on 2009-09-02
- ^ "Tianyulong". Pharyngula. PZ Myers. March 20, 2009. http://scienceblogs.com/pharyngula/2009/03/tianyulong.php. Retrieved on 2009-04-30.
- ^ a b "Tianyulong - a fuzzy dinosaur that makes the origin of feathers fuzzier". Not Exactly Rocket Science:Science for Everyone. Ed Yong. March 18, 2009. http://scienceblogs.com/notrocketscience/2009/03/tianyulong_-_a_fuzzy_dinosaur_that_makes_the_origin_of_feath.php. Retrieved on 2009-07-22.
- ^ Xu, X., Wang, X., Wu, X., (1999). A dromaeosaurid dinosaur with a filamentous integument from the Yixian Formation of China. Nature 401:6750 262-266 doi 10.1038/45769
- ^ Xu. X., Zhao, X., Clark, J.M., (1999). A new therizinosaur from the Lower Jurassic lower Lufeng Formation of Yunnan, China. Journal of Vertebrate Paleontology 21:3 477–483 doi 10.1671/0272-4634
- ^ Xu, X. and Wang, X.-L. (2003). "A new maniraptoran from the Early Cretaceous Yixian Formation of western Liaoning." Vertebrata PalAsiatica, 41(3): 195–202.
- ^ Ji, Q., Ji, S., Lu, J., You, H., Chen, W., Liu, Y., and Liu, Y. (2005). "First avialan bird from China (Jinfengopteryx elegans gen. et sp. nov.)." Geological Bulletin of China, 24(3): 197-205.
- ^ Ji, S., Ji, Q., Lu J., and Yuan, C. (2007). "A new giant compsognathid dinosaur with long filamentous integuments from Lower Cretaceous of Northeastern China." Acta Geologica Sinica, 81(1): 8-15.
- ^ Czerkas, S.A., and Ji, Q. (2002). "A new rhamphorhynchoid with a headcrest and complex integumentary structures." Pp. 15-41 in: Czerkas, S.J. (Ed.). Feathered Dinosaurs and the Origin of Flight. Blanding, Utah: The Dinosaur Museum. ISBN 1-93207-501-1.
- ^ a b c Senter, Phil (2007). "A new look at the phylogeny of Coelurosauria (Dinosauria: Theropoda)". Journal of Systematic Palaeontology 5 (4): 429–463. doi:10.1017/S1477201907002143.
- ^ Osmólska, Halszka; Maryańska, Teresa; & Wolsan, Mieczysław. (2002). "Avialan status for Oviraptorosauria". Acta Palaeontologica Polonica 47 (1): 97–116. http://app.pan.pl/article/item/app47-097.html.
- ^ Martinelli, Agustín G.; & Vera, Ezequiel I. (2007). "Achillesaurus manazzonei, a new alvarezsaurid theropod (Dinosauria) from the Late Cretaceous Bajo de la Carpa Formation, Río Negro Province, Argentina". Zootaxa 1582: 1–17. http://www.mapress.com/zootaxa/2007f/z01582p017f.pdf.
- ^ Novas, Fernando E.; & Pol, Diego. (2002). "Alvarezsaurid relationships reconsidered". in Chiappe, Luis M.; & Witmer, Lawrence M. (eds.). Mesozoic Birds: Above the Heads of Dinosaurs. Berkeley: University of California Press. pp. 121–125. ISBN 0-520-20094-2.
- ^ Sereno, Paul C. (1999). "The evolution of dinosaurs". Science 284 (5423): 2137–2147. doi:10.1126/science.284.5423.2137. PMID 10381873.
- ^ Perle, Altangerel; Norell, Mark A.; Chiappe, Luis M.; & Clark, James M. (1993). "Flightless bird from the Cretaceous of Mongolia". Science 362 (6421): 623–626. doi:10.1038/362623a0.
- ^ Chiappe, Luis M.; Norell, Mark A.; & Clark, James M. (2002). "The Cretaceous, short-armed Alvarezsauridae: Mononykus and its kin". in Chiappe, Luis M.; & Witmer, Lawrence M. (eds.). Mesozoic Birds: Above the Heads of Dinosaurs. Berkeley: University of California Press. pp. 87–119. ISBN 0-520-20094-2.
- ^ Forster, Catherine A.; Sampson, Scott D.; Chiappe, Luis M.; & Krause, David W. (1998). "The theropod ancestry of birds: new evidence from the Late Cretaceous of Madagascar". Science 279 (5358): 1915–1919. doi:10.1126/science.279.5358.1915. PMID 9506938.
- ^ Mayr, Gerald; Pohl, Burkhard; & Peters, D. Stefan (2005). "A well-preserved Archaeopteryx specimen with theropod features.". Science 310 (5753): 1483–1486. doi:10.1126/science.1120331. PMID 16322455.
- ^ Göhlich, U.B., and Chiappe, L.M. (2006). "A new carnivorous dinosaur from the Late Jurassic Solnhofen archipelago." Nature, 440: 329-332.
- Gauthier, J.; De Queiroz, K. (2001), "Feathered dinosaurs, flying dinosaurs, crown dinosaurs, and the name" Aves", New Perspectives on the Origin and Early Evolution of Birds: 7–41.
- Fucheng, Z.; Zhonghe, Z.; Dyke, G. (2006), "Feathers and'feather-like'integumentary structures in Liaoning birds and dinosaurs", Geological Journal 41.
- Zhou, Z. (2004), "The origin and early evolution of birds: discoveries, disputes, and perspectives from fossil evidence", Naturwissenschaften 91 (10): 455–471.
- Vargas, A.O.; Fallon, J.F. (2005), "Birds have dinosaur wings: the molecular evidence", J Exp Zool (Mol Dev Evol) 304: 86–90.
- Prum, R.O. (2002), "Why ornithologists should care about the theropod origin of birds", The Auk 119 (1): 1–17.
- Clark, J.M.; Norell, M.A.; Makovicky, P.J. (2002). "Cladistic approaches to the relationships of birds to other theropod dinosaurs". Mesozoic birds, above the heads of the dinosaurs. pp. 31–61.
- Perrichot, V.; Marion, L.; Néraudeau, D.; Vullo, R.; Tafforeau, P. (2008), "The early evolution of feathers: fossil evidence from Cretaceous amber of France", Proceedings of the Royal Society B: Biological Sciences 275 (1639): 1197.
- DinoBuzz — dinosaur-bird controversy explained, by UC Berkeley.
- Journal of Dinosaur Paleontology, with many articles on dinosaur-bird links.
- Feathered dinosaurs at the American Museum of Natural History.
- First Dinosaur Found With its Body Covering Intact; Displays Primitive Feathers From Head to Tail — AMNH Press Release
- Notes from recent papers on theropod dinosaurs and early avians
- The evolution of feathers | http://fossil.wikia.com/wiki/Feathered_dinosaurs | 13 |
The Roman Republic was governed by a complex constitution, which centered on the principles of a separation of powers and checks and balances. The evolution of the constitution was heavily influenced by the struggle between the aristocracy and the average Roman. Early in its history, the republic was controlled by an aristocracy of individuals who could trace their ancestry back to the early history of the kingdom. Over time, the laws that allowed these individuals to dominate the government were repealed, and the result was the emergence of a new aristocracy which depended on the structure of society, rather than the law, to maintain its dominance. Thus, only a revolution could overthrow this new aristocracy.
Rome also saw its territory expand during this period, from central Italy to the entire Mediterranean world. During the first two centuries, Rome expanded to the point of dominating Italy. During the next century, Rome grew to dominate North Africa, Spain, Greece, and what is now southern France. During the last two centuries of the Roman Republic, Rome grew to dominate the rest of modern France, as well as much of the east. By this point, however, its republican political machinery was finally crushed under the weight of imperialism.
The precise event which signaled the transition of the Roman Republic into the Roman Empire is a matter of interpretation. Historians have variously proposed the appointment of Julius Caesar as perpetual dictator (44 BC), the Battle of Actium (2 September, 31 BC), and the Roman Senate's grant of Octavian's extraordinary powers under the first settlement (January 16, 27 BC), as candidates for the defining pivotal event.
The Senate's ultimate authority derived from its esteem and prestige, which rested on both precedent and custom, as well as on the high caliber and standing of the senators. The Senate passed decrees, called senatus consulta, which were officially "advice" from the Senate to a magistrate; in practice, however, the magistrates usually obeyed them. The focus of the Roman Senate was directed towards foreign policy. While its role in military conflict was officially advisory, the Senate was ultimately the force that oversaw those conflicts. The Senate also managed the civil administration in the city and the towns.
One check on a magistrate's power was collegiality: each magisterial office would be held concurrently by at least two people. Another check was provocatio, a primordial form of due process and a precursor to the modern habeas corpus. If any magistrate attempted to use the powers of the state against a citizen, that citizen could appeal the magistrate's decision to a tribune. In addition, once a magistrate's annual term in office expired, he would have to wait ten years before serving in that office again. Since this did create problems for some consuls and praetors, these magistrates would occasionally have their imperium extended. In effect, they would retain the powers of the office (as a promagistrate), without officially holding that office.
Praetors would administer civil law and command provincial armies. Every five years, two censors would be elected for an eighteen-month term. During their term in office, the two censors would conduct a census. During the census, they could enroll citizens in the senate, or purge them from the senate. Aediles were officers elected to conduct domestic affairs in Rome, such as managing public games and shows. The quaestors would usually assist the consuls in Rome, and the governors in the provinces. Their duties were often financial.
Since the tribunes were considered to be the embodiment of the plebeians, they were sacrosanct. Their sacrosanctity was enforced by a pledge, taken by the plebeians, to kill any person who harmed or interfered with a tribune during his term of office. All of the powers of the tribune derived from this sacrosanctity. One obvious consequence was that it was considered a capital offense to harm a tribune, to disregard his veto, or otherwise to interfere with him.
In times of military emergency, a dictator would be appointed for a term of six months. Constitutional government would dissolve, and the dictator would become the absolute master of the state. When the dictator's term ended, constitutional government would be restored.
In the year 494 BC, the city was at war with two neighboring tribes. The plebeian soldiers refused to march against the enemy, and instead seceded to the Aventine hill. The plebeians demanded the right to elect their own officials. The patricians agreed, and the plebeians returned to the battlefield. The plebeians called these new officials "plebeian tribunes". The tribunes would have two assistants, called "plebeian aediles". In 367 BC a law was passed, which required the election of at least one plebeian aedile each year. In 443 BC, the censorship was created, and in 366 BC, the praetorship was created. Also in 366 BC, the curule aedileship was created. Shortly after the founding of the republic, the Comitia Centuriata ("Assembly of the Centuries") became the principal legislative assembly. In this assembly, magistrates were elected, and laws were passed.
During the fourth century BC, a series of reforms were passed. The result of these reforms was that any law passed by the Plebeian Council would have the full force of law. This gave the tribunes (who presided over the Plebeian Council) a positive character for the first time. Before these laws were passed, the only power that the tribunes held was that of the veto.
In 342 BC, two significant laws were passed. One of these two laws made it illegal to hold more than one office at any given point in time. The other law required an interval of ten years to pass before any magistrate could seek reelection to any office.
During these years, the tribunes and the senators grew increasingly close. The senate realized the need to use plebeian officials to accomplish desired goals. To win over the tribunes, the senators gave the tribunes a great deal of power and the tribunes began to feel obligated to the senate. As the tribunes and the senators grew closer, plebeian senators were often able to secure the tribunate for members of their own families. In time, the tribunate became a stepping stone to higher office.
Around the middle of the fourth century BC, the Concilium Plebis enacted the "Ovinian Law". During the early republic, only consuls could appoint new senators. The Ovinian law, however, gave this power to the censors. It also required the censor to appoint any newly-elected magistrate to the senate. By this point, plebeians were already holding a significant number of magisterial offices. Thus, the number of plebeian senators probably increased quickly. However, it remained difficult for a plebeian to enter the senate if he was not from a well-known political family, as a new patrician-like plebeian aristocracy emerged. The old nobility existed through the force of law, because only patricians were allowed to stand for high office. The new nobility existed due to the organization of society. As such, only a revolution could overthrow this new structure.
By 287 BC, the economic condition of the average plebeian had become poor. The problem appears to have centered around widespread indebtedness. The plebeians demanded relief, but the senators refused to address their situation. The result was the final plebeian secession. The plebeians seceded to the Janiculum hill. To end the secession, a dictator was appointed. The dictator passed a law (the "Hortensian Law"), which ended the requirement that the patrician senators must agree before any bill could be considered by the Plebeian Council. This was not the first law to require that an act of the Plebeian Council have the full force of law. The Plebeian Council acquired this power during a modification to the original Valerian law in 449 BC. The significance of this law was in the fact that it robbed the patricians of their final weapon over the plebeians. The result was that control over the state fell not to the voters of a democracy, but to the new plebeian nobility.
The plebeians had finally achieved political equality with the patricians. However, the plight of the average plebeian had not changed. A small number of plebeian families achieved the same standing that the old aristocratic patrician families had always had, but the new plebeian aristocrats became as uninterested in the plight of the average plebeian as the old patrician aristocrats had always been.
The final decades of this era saw a worsening economic situation for many plebeians. The long military campaigns had forced citizens to leave their farms to fight, only to return to farms that had fallen into disrepair. The landed aristocracy began buying bankrupted farms at discounted prices. As commodity prices fell, many farmers could no longer operate their farms at a profit. The result was the ultimate bankruptcy of countless farmers. Masses of unemployed plebeians soon began to flood into Rome, and thus into the ranks of the legislative assemblies. Their economic state usually led them to vote for the candidate who offered the most for them. A new culture of dependency was emerging, which would look to any populist leader for relief.
Tiberius Gracchus was elected tribune in 133 BC. He attempted to enact a law which would have limited the amount of land that any individual could own. The aristocrats, who stood to lose an enormous amount of money, were bitterly opposed to this proposal. Tiberius submitted this law to the Plebeian Council, but the law was vetoed by a tribune named Marcus Octavius. Tiberius then used the Plebeian Council to impeach Octavius. The theory that a representative of the people ceases to be one when he acts against the wishes of the people was counter to Roman constitutional theory. If carried to its logical end, this theory would remove all constitutional restraints on the popular will, and put the state under the absolute control of a temporary popular majority. His law was enacted, but Tiberius was murdered when he stood for reelection to the tribunate.
Tiberius' brother Gaius was elected tribune in 123 BC. Gaius Gracchus' ultimate goal was to weaken the senate and to strengthen the democratic forces. In the past, for example, the senate would eliminate political rivals either by establishing special judicial commissions or by passing a senatus consultum ultimum ("ultimate decree of the senate"). Both devices would allow the senate to bypass the ordinary due process rights that all citizens had. Gaius outlawed the judicial commissions, and declared the senatus consultum ultimum to be unconstitutional. Gaius then proposed a law which would grant citizenship rights to Rome's Italian allies. By this point, however, the selfish democracy of Rome deserted him. He stood for election to a third term in 121 BC, but was defeated and then murdered. The democracy, however, had finally realized how weak the senate had become.
Several years later, a new power had emerged in Asia. In 88 BC, a Roman army was sent to put down that power, King Mithridates of Pontus. The army, however, was defeated. Lucius Cornelius Sulla, one of the old quaestors of the general Gaius Marius, had been elected consul for the year, and the senate ordered him to assume command of the war against Mithridates. Marius, a member of the democratic ("populare") party, had a tribune revoke Sulla's command of the war. Sulla, a member of the aristocratic ("optimate") party, brought his army back to Italy and marched on Rome. Sulla had become so angry at Marius' tribune that he passed a law intended to permanently weaken the tribunate. He then returned to his war against Mithridates. With Sulla gone, the populares under Marius and Lucius Cornelius Cinna soon took control of the city.
The populare record was not one to be proud of. They reelected Marius consul several times without observing the customary ten-year interval between offices. They also transgressed democracy by advancing unelected individuals to magisterial office, and by substituting magisterial edicts for popular legislation. Sulla soon made peace with Mithridates. In 83 BC, he returned to Rome, overcame all resistance, and captured the city again. Sulla and his supporters then slaughtered most of Marius' supporters. Sulla, who had observed the violent results of radical populare reforms, was naturally conservative. As such, he sought to strengthen the aristocracy, and thus the senate. Sulla made himself dictator, passed a series of constitutional reforms, resigned the dictatorship, and served one last term as consul. He died in 78 BC.
Around 66 BC, a movement to use constitutional, or at least peaceful, means to address the plight of various classes began. After several failures, the movement's leaders decided to use any means that were necessary to accomplish their goals. The movement coalesced under an aristocrat named Lucius Sergius Catiline. The movement was based in the town of Faesulae, which was a natural hotbed of agrarian agitation. The rural malcontents were to advance on Rome, and be aided by an uprising within the city. After assassinating the consuls and most of the senators, Catiline would be free to enact his reforms. The conspiracy was set in motion in 63 BC. The consul for the year, Marcus Tullius Cicero, intercepted messages that Catiline had sent in an attempt to recruit more members. The result of this was that the top conspirators in Rome (including at least one former consul) were executed upon an authorization (of dubious constitutionality) by the senate, and the planned uprising was disrupted. Cicero then sent an army, which cut Catiline's forces to pieces.
The most important result of the Catilinarian conspiracy was that the populare party became discredited. The prior 70 years had witnessed a gradual erosion in senatorial powers. The violent nature of the conspiracy, in conjunction with the senate's skill in disrupting it, did a great deal to repair the senate's image.
Caesar became consul in 59 BC. His colleague, Marcus Calpurnius Bibulus, was an extreme aristocrat. Caesar submitted the laws that he had promised Pompey to the assemblies. Bibulus attempted to obstruct the enactment of these laws, and so Caesar used violent means to ensure their passage. Caesar was then made governor of three provinces. He then facilitated the election of the former patrician Clodius to the tribunate for 58 BC. Clodius set about depriving the faction of Caesar's senatorial enemies of two of its more obstinate leaders, Cato and Cicero. Clodius was a bitter opponent of Cicero because Cicero had testified against him in a sacrilege case. Clodius attempted to try Cicero for executing citizens without a trial during the Catiline conspiracy; as a result, Cicero went into self-imposed exile and his house was burnt down. Clodius also passed a bill that forced Cato to lead the invasion of Cyprus, which would keep him away from Rome for some years. Finally, Clodius passed a bill that made the grain dole free, whereas previously it had merely been subsidised.
Beginning in the summer of 54 BC, a wave of political corruption and violence swept Rome. This chaos reached a climax in January of 52 BC, when Clodius was murdered in a gang war by Milo. On January 1 of 49 BC, an agent of Caesar presented an ultimatum to the senate. The ultimatum was rejected, and the senate then passed a resolution which declared that if Caesar did not lay down his arms by July of that year, he would be considered an enemy of the republic. On January 7 of 49 BC, the senate passed a senatus consultum ultimum, which vested Pompey with dictatorial powers. Pompey's army, however, was composed largely of untested conscripts. Caesar then crossed the Rubicon with his veteran army, and marched towards Rome. Caesar's rapid advance forced Pompey, the consuls and the senate to abandon Rome for Greece. Caesar then entered the city unopposed.
With Pompey defeated, and order restored, Caesar wanted to ensure that his control over the government was undisputed. The powers which he would give himself would ultimately be used by his imperial successors. He would assume these powers by increasing his own authority, and by decreasing the authority of Rome's other political institutions.
Caesar would hold both the dictatorship and the tribunate, but alternate between the consulship and the proconsulship. In 48 BC, Caesar was given permanent tribunician powers. This made his person sacrosanct, gave him the power to veto the senate, and allowed him to dominate the Plebeian Council. In 46 BC, Caesar was given censorial powers, which he used to fill the senate with his own partisans. Caesar then raised the membership of the senate to 900. This robbed the senatorial aristocracy of its prestige, and made it increasingly subservient to him. While the assemblies continued to meet, he submitted all candidates to the assemblies for election, and all bills to the assemblies for enactment. Thus, the assemblies became powerless, and were thus unable to oppose him.
Near the end of his life, Caesar began to prepare for a war against the Parthian Empire. Since his absence from Rome would limit his ability to install his own consuls, he passed a law which allowed him to appoint all magistrates in 43 BC, and all consuls and tribunes in 42 BC. This, in effect, transformed the magistrates from being representatives of the people, to being representatives of the dictator.
After Caesar's assassination, Mark Antony formed an alliance with Caesar's adopted son and great-nephew, Gaius Octavian. Along with Marcus Lepidus, they formed the Second Triumvirate. They held powers that were nearly identical to the powers that Caesar had held under his constitution. As such, the senate and assemblies remained powerless, even after Caesar had been assassinated. The conspirators were then defeated at the Battle of Philippi in 42 BC. Eventually, however, Antony and Octavian fought against each other in one last battle. Antony was defeated in the naval Battle of Actium in 31 BC, and in 30 BC he committed suicide. In 29 BC, Octavian returned to Rome as the unchallenged master of the state.
Life in the Roman Republic revolved around the city of Rome and its famed seven hills. The city also had several theaters, gymnasiums, and many taverns, baths, and brothels. Throughout the territory under Rome's control, residential architecture ranged from very modest houses to country villas, and in the capital city of Rome, to the residences on the elegant Palatine Hill, from which the word "palace" is derived. The vast majority of the population lived in the city center, packed into apartment blocks.
Most Roman towns and cities had a forum and temples, as did the city of Rome itself. Aqueducts were built to bring water to urban centers, and wine and oil were imported from abroad. Landlords generally resided in cities and their estates were left in the care of farm managers. To stimulate higher labor productivity, many landlords freed large numbers of slaves.
Beginning in the middle of the second century BC, Greek culture was increasingly ascendant, in spite of tirades against the "softening" effects of Hellenized culture. By the time of Augustus, cultured Greek household slaves taught the Roman young (sometimes even the girls). Greek sculptures adorned Hellenistic landscape gardening on the Palatine or in the villas, and much Roman cuisine was essentially Greek. Roman writers disdained Latin for a cultured Greek style.
The center of the early social structure was the family, which was not only marked by blood relations but also by the legally constructed relation of patria potestas. The Pater familias was the absolute head of the family; he was the master over his wife, his children, the wives of his sons, the nephews, the slaves and the freedmen, disposing of them and of their goods at will, even putting them to death. Roman law recognized only patrician families as legal entities.
Slavery and slaves were part of the social order; there were slave markets where they could be bought and sold. Many slaves were freed by their masters for services rendered; some slaves could save money to buy their freedom. Generally, mutilation and murder of slaves were prohibited by legislation. It is estimated that over 25% of the Roman population was enslaved.
Cloth and dress distinguished one class of people from another. The tunic worn by plebeians (common people) like shepherds and slaves was made from coarse and dark material, whereas the tunic worn by patricians was of linen or white wool. A magistrate would wear the tunic augusticlavi; senators wore a tunic with broad stripes, called the tunica laticlavi. Military tunics were shorter than the ones worn by civilians. Boys, up until the festival of Liberalia, wore the toga praetexta, which was a toga with a crimson or purple border. The toga virilis (or toga pura) was worn by men over the age of 16 to signify their citizenship in Rome. The toga picta was worn by triumphant generals and bore embroidery depicting their skill on the battlefield. The toga pulla was worn when in mourning.
Even footwear indicated a person’s social status. Patricians wore red and orange sandals, senators had brown footwear, consuls had white shoes, and soldiers wore heavy boots. Men typically wore a toga, and women a stola. The woman's stola looked different from a toga, and was usually brightly colored. The Romans also invented socks for soldiers required to fight on the northern frontiers, which were sometimes worn with sandals.
Romans had simple food habits. The staple meal was simple, generally consumed at around 11 o’clock, and consisted of bread, salad, cheese, fruits, nuts, and cold meat left over from the dinner the night before. The Roman poet Horace mentions another Roman favorite, the olive, in reference to his own diet, which he describes as very simple: "As for me, olives, endives, and smooth mallows provide sustenance." The family ate together, sitting on stools around a table. Fingers were used to eat solid foods and spoons were used for soups.
Wine was considered a staple drink, consumed at all meals and occasions by all classes and was quite cheap. Cato the Elder once advised cutting his rations in half to conserve wine for the workforce. Many types of drinks involving grapes and honey were consumed as well. Drinking on an empty stomach was regarded as boorish and a sure sign for alcoholism, whose debilitating physical and psychological effects were known to the Romans. An accurate accusation of being an alcoholic was an effective way to discredit political rivals. Prominent Roman alcoholics included Mark Antony, and Cicero's own son Marcus (Cicero Minor). Even Cato the Younger was known to be a heavy drinker.
Following various military conquests in the Greek East, Romans adapted a number of Greek educational precepts to their own fledgling system. Home was often the learning center, where children were taught Roman law, customs, and physical training to prepare the boys to grow as Roman citizens and for eventual recruitment into the army. Conforming to discipline was a point of great emphasis. Girls generally received instruction from their mothers in the art of spinning, weaving, and sewing.
Schooling in a more formal sense was begun around 200 BC. Education began at the age of around six, and in the next six to seven years, boys and girls were expected to learn the basics of reading, writing and counting. By the age of twelve, they would be learning Latin, Greek, grammar and literature, followed by training for public speaking. Oratory was an art to be practiced and learnt, and good orators commanded respect. To become an effective orator was one of the objectives of education and learning. In some cases, services of gifted slaves were utilized for imparting education.
The native language of the Romans was Latin. Although surviving Latin literature consists almost entirely of Classical Latin, an artificial and highly stylized and polished literary language from the 1st century BC, the actual spoken language was Vulgar Latin, which significantly differed from Classical Latin in grammar, vocabulary, and eventually pronunciation. Rome's expansion spread Latin throughout Europe, and over time Vulgar Latin evolved and dialectized in different locations, gradually shifting into a number of distinct Romance languages. Many of these languages, including French, Italian, Portuguese, Romanian and Spanish, flourished, the differences between them growing greater over time. Although English is Germanic rather than Romanic in origin, English borrows heavily from Latin and Latin-derived words.
Roman literature was from its very inception influenced heavily by Greek authors. Some of the earliest works we possess are historical epics telling the early military history of Rome. As the republic expanded, authors began to produce poetry, comedy, history, and tragedy. Virgil represents the pinnacle of Roman epic poetry. His Aeneid tells the story of the flight of Aeneas from Troy and his settlement of the city that would become Rome. Lucretius, in his On the Nature of Things, attempted to explicate science in an epic poem. The genre of satire was common in Rome, and satires were written by, among others, Juvenal and Persius. The rhetorical works of Cicero are considered to be some of the best bodies of correspondence recorded in antiquity.
In the 3rd century BC, Greek art taken as booty from wars became popular, and many Roman homes were decorated with landscapes by Greek artists. Portrait sculpture during the period utilized youthful and classical proportions, evolving later into a mixture of realism and idealism. Advancements were also made in relief sculptures, often depicting Roman victories.
Music was a major part of everyday life. The word itself derives from Greek μουσική (mousike), "(art) of the Muses". Many private and public events were accompanied by music, ranging from nightly dining to military parades and manoeuvres. In a discussion of any ancient music, however, non-specialists and even many musicians have to be reminded that much of what makes our modern music familiar to us is the result of developments only within the last 1,000 years; thus, our ideas of melody, scales, harmony, and even the instruments we use would not be familiar to Romans who made and listened to music many centuries earlier.
Over time, Roman architecture was modified as urban requirements changed, and civil engineering and building-construction technology became developed and refined. Roman concrete has remained a riddle, and even after more than 2,000 years some Roman structures still stand magnificently. The architectural style of the capital city was emulated by other urban centers under Roman control and influence. Roman cities were well planned, efficiently managed and neatly maintained.
Roman religious beliefs date back to the founding of Rome, around 800 BC. However, the Roman religion commonly associated with the republic and early empire did not begin until around 500 BC, when Romans came in contact with Greek culture and adopted many of the Greeks’ religious beliefs. Private and personal worship was an important aspect of religious practices. In a sense, each household was a temple to the gods. Each household had an altar (lararium), at which the family members would offer prayers, perform rites, and interact with the household gods. Many of the gods that Romans worshiped came from the Proto-Indo-European pantheon, while others were based on Greek gods. The two most famous deities were Jupiter (the king of the gods) and Mars (the god of war). With its cultural influence spreading over most of the Mediterranean, Romans began accepting foreign gods into their own culture, as well as other philosophical traditions such as Cynicism and Stoicism.
The maniples of the first line consisted of leather-armoured infantry soldiers who wore a brass breastplate and a brass helmet adorned with three feathers approximately 30 cm (12 in) in height, and who carried an iron-clad wooden shield. They were armed with a sword and two throwing spears. The second infantry line was armed and armoured in the same manner as the first, except that its soldiers wore a lighter coat of mail rather than a solid brass breastplate. The third infantry line was the last remnant of the hoplite-style troops (the Greek-style formation used occasionally during the early republic) in the Roman army. They were armed and armoured in the same manner as the soldiers in the second line, with the exception that they carried a lighter spear.
The three infantry classes may have retained some slight parallel to social divisions within Roman society, but at least officially the three lines were based upon age and experience rather than social class. Young, unproven men would serve in the first line, older men with some military experience would serve in the second line, and veteran troops of advanced age and experience would serve in the third line.
The heavy infantry of the maniples were supported by a number of light infantry and cavalry troops, typically 300 horsemen per manipular legion. The cavalry was drawn primarily from the richest class of equestrians. There was an additional class of troops who followed the army without specific martial roles and were deployed to the rear of the third line. Their role in accompanying the army was primarily to supply any vacancies that might occur in the maniples. The light infantry consisted of 1,200 unarmoured skirmishing troops drawn from the youngest and lower social classes. They were armed with a sword and a small shield, as well as several light javelins.
A small navy had operated at a fairly low level after about 300 BC, but it was massively upgraded about forty years later, during the First Punic War. After a period of frenetic construction, the navy mushroomed to a size of more than 400 ships on the Carthaginian ("Punic") pattern. Once completed, it could accommodate up to 100,000 sailors and embarked troops for battle. The navy thereafter declined in size.
The extraordinary demands of the Punic Wars, in addition to a shortage of manpower, exposed the tactical weaknesses of the manipular legion, at least in the short term. In 217 BC, near the beginning of the Second Punic War, Rome was forced to effectively ignore its long-standing principle that its soldiers must be both citizens and property owners. During the second century BC, Roman territory saw an overall decline in population, partially due to the huge losses incurred during various wars. This was accompanied by severe social stresses and the greater collapse of the middle classes. As a result, the Roman state was forced to arm its soldiers at the expense of the state, which it had not had to do in the past.
The distinction between the heavy infantry types began to blur, perhaps because the state was now assuming the responsibility of providing standard-issue equipment. In addition, the shortage of available manpower led to a greater burden being placed upon Rome's allies for the provision of allied troops. Eventually, the Romans were forced to begin hiring mercenaries to fight alongside the legions.
Unlike earlier in the Republic, legionaries were no longer fighting on a seasonal basis to protect their land. Instead, they received standard pay, and were employed by the state on a fixed-term basis. As a consequence, military duty began to appeal most to the poorest sections of society, to whom a salaried pay was attractive. A destabilising consequence of this development was that the proletariat "acquired a stronger and more elevated position" within the state.
The legions of the late Republic were, structurally, almost entirely heavy infantry. The legion's main sub-unit was called a cohort and consisted of approximately 480 infantrymen. The cohort was therefore a much larger unit than the earlier maniple sub-unit, and was divided into six centuries of 80 men each. Each century was separated further into 10 "tent groups" of 8 men each. Legions additionally consisted of a small body, typically 120 men, of Roman legionary cavalry. The cavalry troops were used as scouts and dispatch riders rather than battlefield cavalry. Legions also contained a dedicated group of artillery crew of perhaps 60 men. Each legion was normally partnered with an approximately equal number of allied (non-Roman) troops.
However, "the most obvious deficiency" of the Roman army remained its shortage of cavalry, especially heavy cavalry. As Rome's borders expanded and its adversaries changed from largely infantry-based to largely cavalry-based troops, the infantry-based Roman army began to find itself at a tactical disadvantage, particularly in the East.
After having declined in size following the subjugation of the Mediterranean, the Roman navy underwent short-term upgrading and revitalisation in the late Republic to meet several new demands. Under Caesar, an invasion fleet was assembled in the English Channel to allow the invasion of Britannia; under Pompey, a large fleet was raised in the Mediterranean Sea to clear the sea of Cilician pirates. During the civil war that followed, as many as a thousand ships were either constructed or pressed into service from Greek cities.
As with most ancient civilisations, Rome's military served the triple purposes of securing its borders, exploiting peripheral areas through measures such as imposing tribute on conquered peoples, and maintaining internal order. From the outset, Rome's military typified this pattern and the majority of Rome's campaigns were characterised by one of two types. The first is the territorial expansionist campaign, normally begun as a counter-offensive, in which each victory brought subjugation of large areas of territory. The second is the civil war, of which examples plagued the Roman Republic in its final century.
Roman armies were not invincible, despite their formidable reputation and host of victories. Over the centuries the Romans "produced their share of incompetents who led Roman armies into catastrophic defeats." Nevertheless, it was generally the fate of even the greatest of Rome's enemies, such as Pyrrhus and Hannibal, to win the battle but lose the war. The history of Rome's campaigning is, if nothing else, a history of obstinate persistence overcoming appalling losses.
After recovering surprisingly swiftly from the sack of Rome, the Romans immediately resumed their expansion within Italy. The First Samnite War, fought between 343 BC and 341 BC, was a relatively short affair: the Romans beat the Samnites in two battles, but were forced to withdraw from the war before they could pursue the conflict further due to the revolt of several of their Latin allies in the Latin War. Rome bested the Latins in the Battle of Vesuvius and again in the Battle of Trifanum, after which the Latin cities were obliged to submit to Roman rule.
The Second Samnite War, from 327 BC to 304 BC, was a much longer and more serious affair for both the Romans and Samnites. The fortunes of the two sides fluctuated throughout its course. The Romans then proved victorious at the Battle of Bovianum and the tide turned strongly against the Samnites from 314 BC onwards, leading them to sue for peace with progressively less generous terms. By 304 BC the Romans had effectively annexed the greater degree of the Samnite territory, founding several colonies.
Seven years after their defeat, with Roman dominance of the area looking assured, the Samnites rose again and defeated a Roman army in 298 BC, to open the Third Samnite War. With this success in hand they managed to bring together a coalition of several previous enemies of Rome. In the Battle of Populonia in 282 BC Rome finished off the last vestiges of Etruscan power in the region.
By the beginning of the third century, Rome had established itself as a major power on the Italian Peninsula, but had not yet come into conflict with the dominant military powers in the Mediterranean at the time: Carthage and the Greek kingdoms.
When a diplomatic dispute between Rome and a Greek colony erupted into open warfare in a naval confrontation, the Greek colony appealed for military aid to Pyrrhus, ruler of the northwestern Greek kingdom of Epirus. Motivated by a personal desire for military accomplishment, Pyrrhus landed a Greek army of some 25,000 men on Italian soil in 280 BC.
Despite early victories, Pyrrhus found his position in Italy untenable. Rome steadfastly refused to negotiate with Pyrrhus as long as his army remained in Italy. Facing unacceptably heavy losses with each encounter with the Roman army, Pyrrhus withdrew from the peninsula. In 275 BC, Pyrrhus again met the Roman army at the Battle of Beneventum. While Beneventum was indecisive, Pyrrhus realised his army had been exhausted and reduced, by years of foreign campaigns, and seeing little hope for further gains, he withdrew completely from Italy.
The conflicts with Pyrrhus would have a great effect on Rome. Rome had shown it was capable of pitting its armies successfully against the dominant military powers of the Mediterranean, and that the Greek kingdoms were incapable of defending their colonies in Italy and abroad. Rome quickly moved into southern Italia, subjugating and dividing the Greek colonies. Now, Rome effectively dominated the Italian peninsula, and won an international military reputation.
The First Punic War began in 264 BC when settlements on Sicily began to appeal to the two powers between which they lay - Rome and Carthage - in order to solve internal conflicts. The war saw land battles in Sicily early on, but the theatre shifted to naval battles around Sicily and Africa. Before the First Punic War there was no Roman navy to speak of. The new war in Sicily against Carthage, a great naval power, forced Rome to quickly build a fleet and train sailors.
The first few naval battles were catastrophic disasters for Rome. However, after training more sailors and inventing a grappling engine, a Roman naval force was able to defeat a Carthaginian fleet, and further naval victories followed. The Carthaginians then hired Xanthippus of Carthage, a Spartan mercenary general, to reorganise and lead their army. He managed to cut off the Roman army from its base by re-establishing Carthaginian naval supremacy. With their newfound naval abilities, the Romans then beat the Carthaginians at sea once more at the Battle of the Aegates Islands, leaving Carthage without a fleet or sufficient coin to raise one. For a maritime power, the loss of its access to the Mediterranean stung financially and psychologically, and the Carthaginians sued for peace.
Continuing distrust led to the renewal of hostilities in the Second Punic War when Hannibal Barca attacked a Spanish town, which had diplomatic ties to Rome. Hannibal then crossed the Italian Alps to invade Italy. Hannibal's successes in Italy began immediately, and reached an early climax at the Battle of Cannae, where 70,000 Romans were killed.
In three battles, the Romans managed to hold off Hannibal but then Hannibal smashed a succession of Roman consular armies. By this time Hannibal's brother Hasdrubal Barca sought to cross the Alps into Italy and join his brother with a second army. Hasdrubal managed to break through into Italy only to be defeated decisively on the Metaurus River. Unable to defeat Hannibal himself on Italian soil, the Romans boldly sent an army to Africa with the intention of threatening the Carthaginian capital. Hannibal was recalled to Africa, and defeated at the Battle of Zama.
Carthage never managed to recover after the Second Punic War and the Third Punic War that followed was in reality a simple punitive mission to raze the city of Carthage to the ground. Carthage was almost defenceless and when besieged offered immediate surrender, conceding to a string of outrageous Roman demands. The Romans refused the surrender, and the city was stormed after a short siege and completely destroyed. Ultimately, all of Carthage's North African and Spanish territories were acquired by Rome.
Rome's preoccupation with its war with Carthage provided an opportunity for Philip V of the kingdom of Macedon in northern Greece, to attempt to extend his power westward. Philip sent ambassadors to Hannibal's camp in Italy, to negotiate an alliance as common enemies of Rome. However, Rome discovered the agreement when Philip's emissaries were captured by a Roman fleet. The First Macedonian War saw the Romans involved directly in only limited land operations, but they ultimately achieved their objective of pre-occupying Philip and preventing him from aiding Hannibal.
Macedon began to encroach on territory claimed by several other Greek city states in 200 BC and these states pleaded for help from their newfound ally Rome. Rome gave Philip an ultimatum that he must submit Macedonia to being essentially a Roman province. Philip refused, and Rome declared war against Philip in the Second Macedonian War. Ultimately, in 197 BC, the Romans defeated Philip at the Battle of Cynoscephalae, and Macedonia was forced to surrender.
Rome now turned its attentions to another Greek kingdom, the Seleucid Empire, in the east. A Roman force defeated the Seleucids at the Battle of Thermopylae and forced them to evacuate Greece. The Romans then pursued the Seleucids beyond Greece, beating them in the decisive engagement of the Battle of Magnesia.
In 179 BC Philip died and his talented and ambitious son, Perseus, took his throne and showed a renewed interest in Greece. Rome declared war on Macedonia again, starting the Third Macedonian War. Perseus initially had greater military success against the Romans than his father. However, as with all such ventures in this period, Rome responded by simply sending another army. The second consular army duly defeated the Macedonians at the Battle of Pydna in 168 BC and the Macedonians duly capitulated, ending the Third Macedonian War.
The Fourth Macedonian War, fought from 150 BC to 148 BC, was the final war between Rome and Macedon. The Romans swiftly defeated the Macedonians at the Second battle of Pydna. Another Roman army besieged and destroyed Corinth in 146 BC, which led to the surrender and thus conquest of the rest of Greece.
In 121 BC, Rome came into contact with two Celtic tribes (from a region in modern France), both of which it defeated with apparent ease. The Cimbrian War (113-101 BC) was a far more serious affair than the earlier clashes of 121 BC. The Germanic tribes of the Cimbri and the Teutons migrated from northern Europe into Rome's northern territories, and clashed with Rome and her allies. At the Battle of Aquae Sextiae and the Battle of Vercellae both tribes were virtually annihilated, which ended the threat.
Between 135 BC and 71 BC there were three "Servile Wars" involving slave uprisings against the Roman state; the third and final uprising was the most serious, ultimately involving between 120,000 and 150,000 slaves. Additionally, in 91 BC the Social War broke out between Rome and its former allies in Italy over dissent among the allies that they shared the risks of Rome's military campaigns but not its rewards. Although they lost militarily, the allies achieved their objectives with legal proclamations which granted citizenship to more than 500,000 Italians.
The internal unrest reached its most serious state, however, in the two civil wars that were caused by the consul Lucius Cornelius Sulla at the beginning of 82 BC. In the Battle of the Colline Gate, at the very door of the city of Rome, a Roman army under Sulla bested an army of the Roman senate and entered the city. Sulla's actions marked a watershed in the willingness of Roman troops to wage war against one another, and paved the way for the wars which ultimately overthrew the republic and led to the founding of the Roman Empire.
The Second Mithridatic War began when Rome tried to annex a province that Mithridates claimed as his own. In the Third Mithridatic War, first Lucius Licinius Lucullus and then Pompey the Great were sent against Mithridates. Mithridates was finally defeated by Pompey in the night-time Battle of the Lycus.
The Mediterranean had at this time fallen into the hands of pirates, largely from Cilicia. The pirates not only strangled shipping lanes but also plundered many cities on the coasts of Greece and Asia. Pompey was nominated as commander of a special naval task force to campaign against the pirates. It took Pompey just forty days to clear the western portion of the sea of pirates and restore communication between Iberia (Spain), Africa, and Italy.
During a term as praetor in Iberia (modern Spain), Pompey's contemporary Julius Caesar defeated two local tribes in battle. Following his term as consul in 59 BC, he was then appointed to a five year term as the proconsular Governor of Cisalpine Gaul (current northern Italy), Transalpine Gaul (current southern France) and Illyria (the modern Balkans). Not content with an idle governorship, Caesar strove to find reason to invade Gaul, which would give him the dramatic military success he sought. When two local tribes began to migrate on a route that would take them near (not into) the Roman province of Transalpine Gaul, Caesar had the barely sufficient excuse he needed for his Gallic Wars, fought between 58 BC and 49 BC.
Caesar defeated large armies at major battles in 58 BC and 57 BC. In 55 and 54 BC he made two expeditions into Britain, becoming the first Roman to do so. Caesar then defeated a union of Gauls at the Battle of Alesia, completing the Roman conquest of Transalpine Gaul. By 50 BC, the entirety of Gaul lay in Roman hands. Gaul never regained its Celtic identity, never attempted another nationalist rebellion, and remained loyal to Rome until the fall of the western empire in 476.
By the spring of 49 BC, when Caesar crossed the Rubicon river with his invading forces and swept down the Italian peninsula towards Rome, Pompey ordered the abandonment of Rome. Caesar first directed his attention to the Pompeian stronghold of Iberia (modern Spain) but decided to tackle Pompey himself in Greece. Pompey initially defeated Caesar, but failed to follow up on the victory. Pompey was then decisively defeated at the Battle of Pharsalus in 48 BC, despite outnumbering Caesar's forces two to one. Pompey fled again, this time to Egypt, where he was murdered.
Pompey's death did not result in an end to the civil wars since initially Caesar's enemies were manifold and Pompey's supporters continued to fight on after his death. In 46 BC Caesar lost perhaps as much as a third of his army, but ultimately came back to defeat the Pompeian army of Metellus Scipio in the Battle of Thapsus, after which the Pompeians retreated yet again to Iberia. Caesar then defeated the combined Pompeian forces at the Battle of Munda.
Despite his military success, or probably because of it, fear spread of Caesar, now the primary figure of the Roman state, becoming an autocratic ruler and ending the Roman Republic. This fear drove a group of senators to assassinate him in March of 44 BC. Further civil war followed between those loyal to Caesar and those who supported the actions of the assassins. Caesar's supporter Mark Antony condemned Caesar's assassins and war broke out between the two factions. Antony was denounced as a public enemy, and Caesar's adopted son and chosen heir, Gaius Octavian, was entrusted with the command of the war against him. At the Battle of Mutina Antony was defeated by the consuls Hirtius and Pansa, who were both killed.
Octavian came to terms with Caesarians Antony and Lepidus in 43 BC when the Second Triumvirate was formed. In 42 BC Triumvirs Mark Antony and Octavian fought the Battle of Philippi with Caesar's assassins Brutus and Cassius. Although Brutus defeated Octavian, Antony defeated Cassius, who committed suicide. Brutus also committed suicide shortly afterwards.
However, civil war flared again when the Second Triumvirate of Octavian, Lepidus and Mark Antony failed. The ambitious Octavian built a power base of patronage and then launched a campaign against Mark Antony. At the naval Battle of Actium (off the coast of Greece), Octavian decisively defeated Antony and Cleopatra. Octavian was granted a series of special powers including sole imperium within the city of Rome, permanent consular powers and credit for every Roman military victory, since all future generals were acting under his command. In 27 BC Octavian was granted the use of the names Augustus and Princeps, indicating his primary status above all other Romans, and he adopted the title Imperator Caesar, making him the first Roman Emperor.
32 | ANTEBELLUM TEXAS. In the drama of Texas history the period of early statehood, from 1846 to 1861, appears largely as an interlude between two great adventures: the Republic of Texas and the Civil War. These fifteen years did indeed lack the excitement and romance of the experiment in nationhood and the "Lost Cause" of the Confederacy. Events and developments during the period, however, were critical in shaping the Lone Star State as part of the antebellum South. By 1861 Texas was so like the other Southern states economically, socially, and politically that it joined them in secession and war. Antebellum Texans cast their lot with the Old South and in the process gave their state an indelibly Southern heritage.
When President Anson Jones lowered the flag of the republic for the last time in February 1846, the framework for the development of Texas over the next fifteen years was already constructed. The great majority of the new state's approximately 100,000 white inhabitants were natives of the South, who, as they settled in the eastern timberlands and south central plains, had built a life as similar as possible to that experienced in their home states. Their economy, dependent on agriculture, was concentrated first on subsistence farming and herding and then on production of cotton as a cash crop. This meant the introduction of what southerners called their "Peculiar Institution": slavery. In 1846 Texas had more than 30,000 black slaves and produced an even larger number of bales of cotton (see COTTON CULTURE). Political institutions were also characteristically Southern. The Constitution of 1845, written by a convention in which natives of Tennessee, Virginia, and Georgia alone constituted a majority, depended heavily on Louisiana's fundamental law as well as on the existing Constitution of the Republic of Texas. As befitted an agricultural state led by Jacksonians, the constitution prohibited banking and required a two-thirds vote of the legislature to charter any private corporation. Article VIII guaranteed the institution of slavery.
With the foundations of their society in place and the turbulence of the republic behind them, Texans in 1846 anticipated years of expansion and prosperity. Instead, however, they found themselves and their state's interests heavily involved in the war between Mexico and the United States that broke out within a few months of annexation (see MEXICAN WAR). Differences between the two nations arose from a variety of issues, but disagreement over the southwestern boundary of Texas provided the spark for war. Mexico contended that Texas reached only to the Nueces River, whereas after 1836 the republic had claimed the Rio Grande as the border. President James K. Polk, a Jacksonian Democrat from Tennessee, backed the Texans' claims, and in January 1846, after unsuccessful attempts to make the Rio Grande the boundary and settle other differences by diplomacy, he ordered Gen. Zachary Taylor to occupy the disputed area. In March Taylor moved to the Rio Grande across from Matamoros. Battles between his troops and Mexican soldiers occurred north of the river in May, and Congress, at Polk's request, declared war. Approximately 5,000 Texans served with United States forces in the conflict that followed, fighting for both General Taylor in northern Mexico and Gen. Winfield Scott on his campaign to capture Mexico City. In the Treaty of Guadalupe Hidalgo, which ended the war in February 1848, Mexico recognized Texas as a part of the United States and confirmed the Rio Grande as its border.
Victory in the Mexican War soon led to a dispute concerning the boundary between Texas and the newly acquired Mexican Cession. This conflict arose from the Lone Star State's determination to make the most of the Rio Grande as its western boundary by claiming an area reaching to Santa Fe and encompassing the eastern half of what is now New Mexico. In March 1848 the Texas legislature decreed the existence of Santa Fe County, and Governor George T. Wood sent Spruce M. Baird to organize the local government and serve as its first judge. The people of Santa Fe, however, proved unwilling to accept Texas authority, and United States troops in the area supported them. In July 1849, after failing to organize the county, Baird left. At the same time a bitter controversy was developing in Congress between representatives of the North and the South concerning the expansion of slavery into the territory taken from Mexico. The Texans' western boundary claims became involved in this larger dispute, and the Lone Star State was drawn into the crisis of 1850 on the side of the South.
President Zachary Taylor, who took office in March 1849, proposed to handle the Mexican Cession by omitting the territorial stage and admitting California and New Mexico directly into the Union. His policy angered southerners in general and Texans in particular. First, both California and New Mexico were expected to prohibit slavery, a development that would give the free states numerical superiority in the Union. Second, Taylor's approach in effect pitted the federal government against Texas claims to the Santa Fe area and promised to stop the expansion of slavery at the Lone Star State's western boundary. Southern extremists resolved to break up the Union before accepting the president's proposals. They urged Texas to stand firm on the boundary issue, and the Mississippi state legislature called for a convention in Nashville during June 1850 "to devise and adopt some means of resistance" to Northern aggression. Ultra-Southern spokesmen in Texas took up the cry, demanding that their state send delegates to Nashville and take all steps necessary to prove that it was not "submissionist."
In December 1849 the Texas legislature responded to the crisis with an act designating new boundaries for Santa Fe County, and Robert S. Neighbors was sent to organize the government there. The legislature also provided for the election in March 1850 of eight delegates to attend the Nashville convention for "consultation and mutual action on the subject of slavery and Southern Rights." By June, when Neighbors reported that the people of Santa Fe did not want to be part of Texas, the state appeared ready to take aggressive action. Moderation prevailed, however, in Washington, Nashville, and Texas. By September 1850 Congress had worked out a compromise to settle the crisis. After much wrangling, Senator James A. Pearce of Maryland proposed that the boundary between Texas and New Mexico be a line drawn east from the Rio Grande along the 32d parallel to the 103d meridian, then north to 36°30', and finally east again to the 100th meridian. In return for its New Mexican claims, Texas would receive $10 million in United States bonds, half of which would be held to satisfy the state's public debt. Some Texans bitterly opposed the "Infamous Texas Bribery Bill," but extremism was on the wane across the state and the South as a whole. In Texas the crisis had aroused the Unionism of Sam Houston, the state's most popular politician. He made fun of the election to choose delegates to the Nashville convention. The vote had been called too late to allow effective campaigning anyhow, and of those elected only former governor J. Pinckney Henderson actually attended the meeting in Tennessee. (Incidentally, in this same election Texans approved the permanent choice of Austin as state capital.) The Nashville convention, although it urged Texas to stand by its claim to New Mexico, generally adopted a moderate tone. In November 1850 Texans voted by a two-to-one margin to accept the Pearce Bill (see COMPROMISE OF 1850).
The crisis of 1850 demonstrated the existence of strong Unionist sentiment in Texas, but it also revealed that the Lone Star State, in spite of its location on the southwestern frontier, was identified with the Old South. Charles C. Mills of Harrison County summarized this circumstance perfectly in a letter to Governor Peter H. Bell during the crisis: "Texas having so recently come into the Union, should not be foremost to dissolve it, but I trust she will not waver, when the crisis shall come."
As the boundaries of antebellum Texas were being settled and its identity shaped during the first years of statehood, new settlers poured in. A state census in 1847 reported the population at 142,009. Three years later a far more complete United States census (the first taken in Texas) enumerated 212,592 people, excluding Indians, in the state. Immigrants arriving in North Texas came primarily from the upper South and states of the old Northwest such as Illinois. Settlers entering through the Marshall-Jefferson area and Nacogdoches were largely from the lower South. On the Gulf Coast, Galveston and Indianola served as entry points for many lower southerners. Numerous foreign-born immigrants, especially Germans, also entered through these ports during the late 1840s.
The Texas to which these migrants came was a frontier state in the classic sense. That is, it had a line of settlement advancing westward as pioneers populated and cultivated new land. Also, as in most American frontiers, settlers faced problems with Indians. By the late 1840s Texas frontiersmen had reached the country of the fierce Comanches and were no doubt relieved that, since annexation, the task of defending the frontier rested with the United States Army. In 1848–49 the army built a line of eight military posts from Fort Worth to Fort Duncan, at Eagle Pass on the Rio Grande. Within two years, under the pressure to open additional lands and do a better job of protecting existing settlements, federal forces built seven new forts approximately 100 miles to the west of the existing posts. This new line of defense, when completed in 1852, ran from Fort Belknap, on the Brazos River, to Fort Clark, at the site of present-day Brackettville. Conflict with the Comanches continued for the remainder of the decade as federal troops, joined at times by companies of Texas Rangers, sought to protect the frontier. They were never entirely successful, however, and Indian warfare continued after the Civil War. With the Comanches and the lack of water and wood on the western plains both hampering its advance, the Texas frontier did not move during the 1850s beyond the seven forts completed at the onset of the decade. Areas immediately to the east of the military posts continued to fill, but the rush westward slowed. In 1860 the line of settlement ran irregularly from north to south through Clay, Young, Erath, Brown, Llano, Kerr, and Uvalde counties.
Important as it was to antebellum Texas, this western frontier was home to only a small fraction of the state's population. The great majority lived well to the east in areas where moving onto unclaimed land and fighting Indians were largely things of the past by 1846. These Texans, not frontiersmen in the traditional sense, were yet part of an extremely significant frontier: the southwesterly march of slaveholding, cotton-producing farmers and planters. "King Cotton" ruled the Old South's agricultural economy, and he came to rule antebellum Texas as well. Anglo-American settlers had sought from the beginning to build a plantation society in the region stretching from the Red River through the East Texas timberlands to the fertile soils along the Trinity, Brazos, Colorado, and lesser rivers that emptied into the Gulf of Mexico. During the 1850s this cotton frontier developed rapidly.
At the census of 1850, 95 percent of the 212,592 Texans lived in the eastern two-fifths of the state, an area the size of Alabama and Mississippi combined. Ten years later, although the state's population had grown to 604,215, the overwhelming majority still lived in the same region. The population had far greater ethnic diversity than was common elsewhere in the South. There were large numbers of Germans in the south central counties, many Mexican Americans from San Antonio southward, and smaller groups of Poles, Czechs, and other foreign-born immigrants scattered through the interior. Nevertheless, natives of the lower South constituted the largest group of immigrants to Texas during the 1850s, and southerners headed three of every four households there in 1860. Like immigrants from the Deep South, slaves also constituted an increasingly large part of the Lone Star State's population (27 percent in 1850 and 30 percent in 1860). Their numbers rose from 58,161 to 182,566, a growth of 214 percent, during the decade.
The expansion of slavery correlated closely with soaring cotton production, which rose from fewer than 60,000 bales in 1850 to more than 400,000 in 1860. In 1850, of the nineteen counties having 1,000 or more slaves (ten in northeastern Texas and nine stretching inland along the Brazos and Colorado rivers), fifteen produced 1,000 or more bales of cotton. The census of 1860 reported sixty-four counties having 1,000 or more slaves, and all except eight produced 1,000 or more bales. These included, with the exception of an area in extreme Southeast Texas, virtually every county east of a line running from Fannin County, on the Red River, southwestward through McLennan County to Comal County and then along the San Antonio River to the Gulf. Only six counties in this area managed to grow at least 1,000 bales of cotton without a matching number of slaves.
Slavery and cotton thus marched hand-in-hand across antebellum Texas, increasingly dominating the state's agricultural economy. Plantations in Brazoria and Matagorda counties produced significant sugar crops, but elsewhere farmers and planters concentrated on cotton as a source of cash income. By 1860 King Cotton had the eastern two-fifths of Texas, excepting only the north central prairie area around Dallas and the plains south of the San Antonio River, firmly within his grasp.
Perhaps, as Charles W. Ramsdell suggested, the cotton frontier was approaching its natural limits in Texas during the 1850s. The soil and climate of western Texas precluded successful plantation agriculture, and proximity to Mexico, with its offer of freedom for runaways, reinforced these geographical limitations. In reality, however, regardless of these apparent natural boundaries, slavery and cotton had great potential for continued expansion in Texas after 1860. Growth had not ended anywhere in the state at that time, and the north central prairie area had not even been opened for development. The fertile soils of the Blackland Prairie and Grand Prairie counties would produce hundreds of thousands of bales of cotton once adequate transportation reached that far inland, and railroads would soon have met that need. The two prairie regions combined were more than three-fourths as large as the state of South Carolina but had only 6 percent as many slaves in 1860. The cotton frontier of antebellum Texas constituted a virtual empire for slavery, and such editors as John F. Marshall of the Austin State Gazette wrote confidently of the day when the state would have two million bondsmen or even more.
Only a minority of antebellum Texans, however, actually owned slaves and participated directly in the cash-crop economy. Only one family in four held so much as a single slave, and more than half of those had fewer than five bondsmen. Small and large planters, defined respectively as those owning ten to nineteen and twenty or more slaves, held well over half of the state's slaves in both 1850 and 1860. This planter class profited from investments in land, labor, and cotton and, although a decided minority even among slaveholders, provided the driving force behind the state's economy.
Agriculture developed rapidly in antebellum Texas, as evidenced by a steady expansion in the number of farms, the amount of improved acreage, the value of livestock, and the size of crops produced. Slave labor contributed heavily to that growth. On the other hand, during the 1850s Texas developed very slowly in terms of industry, commerce, and urban growth. In both 1850 and 1860 only about 1 percent of Texas family heads had manufacturing occupations. Texas industries in 1860 produced goods valued at $6.5 million, while, by contrast, Wisconsin, another frontier state that had entered the Union in 1846, reported nearly $28 million worth of manufactures. Commercial activity, retarded no doubt by inadequate transportation and the constitutional prohibition on banking (see BANKS AND BANKING), also occupied only a small minority (less than 5 percent) of Texans. With industry and commerce so limited, no urban area in the state reached a population of 10,000 during the antebellum years. In 1860 San Antonio (8,200), Galveston (7,307), Houston (4,800), and Austin (3,500) were the state's only "cities." By contrast, Milwaukee, Wisconsin, reported a population of 20,000 as early as 1850.
Antebellum Texans failed to diversify their economy for several reasons. Part of the explanation was geographical: climate and soil gave Texas an advantage over most regions of the United States, certainly those outside the South, in plantation agriculture and thus helped produce an overwhelmingly agricultural economy. Slavery appears also to have retarded the rise of industry and commerce. Slave labor made the plantation productive and profitable and reduced the need for the invention and manufacture of farm machinery. Planters concentrated on self-sufficiency and on the cultivation of cotton, a crop that quickly passed out of Texas for processing elsewhere with a minimum involvement of local merchants along the way. Opportunities for industry and commerce were thus reduced by the success of the plantation. Moreover, the planters, who were, after all, the richest and most enterprising men in Texas and who would have had to lead any move to diversify the economy, benefited enough financially and socially from combining land and slave labor that they generally saw no need to risk investments in industry or commerce.
Planters did have an interest in improving transportation in their state. From the 1820s onward Texans had utilized the major rivers from the Red River to the Rio Grande to move themselves and their goods and crops, but periodic low water, sand bars, and rafts of logs and brush made transportation by water highly unreliable. Moving supplies and cotton on Texas roads, which became quagmires in wet weather, was simply too slow and expensive. Thus, as the cotton frontier advanced inland, the movement of crops and supplies, never an easy matter, became increasingly difficult. Railroads offered a solution, albeit not without more financial difficulties than promoters could imagine. The state legislature chartered the state's first railroad, the Buffalo Bayou, Brazos and Colorado, in February 1850. Intended to run from Harrisburg, near Houston, westward to Alleyton, on the Colorado River, and tap the commerce on both the Brazos and Colorado, this road became operational to Stafford's Point in 1853 and reached its destination by 1860. Dozens of other railroads received charters after 1850, but for every one that actually operated six came to nothing.
Railroad promoters, faced with a difficult task and armed with arguments about the obvious importance of improved transportation in Texas, insisted that the state should subsidize construction. Their efforts to gain public aid for railroad corporations focused on obtaining land grants and using the United States bonds acquired in the settlement of the New Mexico boundary as a basis for loans. Some Texans, however, led by Lorenzo Sherwood, a New York-born lawyer who lived in Galveston, opposed the whole concept of state subsidies for private corporations. Sherwood developed a State Plan calling for the government in Austin to construct and own a thousand-mile network of railroads. Those who favored private promoters managed early in 1854 to obtain a law authorizing the granting of sixteen sections of land for each mile of road built to all railroads chartered after that date. However, the struggle between those who favored loans and supporters of the State Plan continued into 1856, as Sherwood won election to the legislature and continued to fight effectively for his ideas. His opponents finally seized upon statements Sherwood made against reopening the African slave trade, accused him of opposing slavery, and forced him under the threat of violence to resign from the legislature. Within less than a month, in July 1856, the legislature passed a bill authorizing loans of $6,000 to railroad companies for every mile of road built.
Antebellum Texans thus decided that private corporations encouraged by state aid would build their railroads. Progress was limited, however. By 1860 the state had approximately 400 miles of operating railroad, but almost all of it radiated from Houston. Major lines included the Buffalo Bayou, Brazos and Colorado, from Harrisburg to Alleyton through Houston; the Galveston, Houston and Henderson, from Galveston to Houston; and the Texas and New Orleans, from Houston to Orange through Beaumont. Only the San Antonio and Mexican Gulf Railway, which ran from Port Lavaca to Victoria, and the Southern Pacific Railroad (not to be confused with the future system of that name) in Harrison County did not connect in some fashion with Houston. Railroad building progressed slowly because antebellum Texas did not have the native capital to finance it, the industrial base to produce building materials, or the population and diversified economy to provide traffic the year around. At least the stage had been set, however, for building an adequate network of rail transportation after 1865.
Thus, as the cotton frontier of Texas developed during the 1850s, the state's economy increasingly mirrored that of the Deep South. A majority of Texans lived as small, nonslaveholding farmers, but plantation agriculture and slave labor produced the state's wealth and provided its economic leaders. At the same time, there was little development in terms of industry, commerce, urban growth, and transportation. With an economy of this nature and a Southern-born population predominant in most areas, antebellum Texas naturally developed social practices and institutions that also were Southern to the core.
Women in antebellum Texas found their role in society shaped by traditions that, while by no means unique to the South, were strongly entrenched in that region. The ideal female was a homemaker and mother, pious and pure, strong and hardworking, and yet docile and submissive. She was placed on a pedestal and admired, but she had no political rights and suffered serious disabilities before the law. Women could not, for example, serve on juries, act as lawyers, or witness a will. Texas women, however, did enjoy significant property rights. Married women retained title to property such as land and slaves owned before they wed, had community rights to all property acquired during a marriage, and had full title to property that came into their hands after divorce or the death of a husband. These rights allowed Texas women to head families, own plantations, and manage estates in ways that were anything but passive and submissive.
Antebellum Texans favored churches in the evangelical tradition of the Old South. Methodists far outnumbered other denominations. By 1860 the Methodist Episcopal Church, South, as it was called after the North-South split of 1844, had 30,661 members. Baptists constituted the second largest denomination, followed by the Presbyterian, Christian, Cumberland Presbyterian, Catholic, Lutheran, and Episcopal churches. These institutions provided spiritual and moral guidance and offered educational instruction as well. Moreover, religious activities brought people together in settings that encouraged friendly social interchange and relieved the isolation of rural life.
Education in antebellum Texas was largely a matter of private enterprise, both secular and church affiliated. At the most basic level, would-be teachers simply established common schools and offered primary and elementary instruction to children whose parents could pay tuition. More formal education took place in state-chartered institutions, which often bore names promising far more than they could deliver. Between 1846 and 1861 the Texas legislature chartered 117 schools, including forty academies, thirty colleges, and seven universities. Most of these institutions lasted only a few years, had relatively few students, and, regardless of their titles, offered little beyond secondary education. The University of San Augustine and Marshall University, for example, both chartered in 1842, had primary departments teaching reading, writing, and arithmetic. The quality of education at all levels in Texas schools suffered from a variety of problems, including the fact that teachers who were dependent for their pay on the good will of parents could not afford to be very demanding. Schools often covered their shortcomings and bolstered their academic reputations by holding public oral examinations that the whole community could attend. Parents and most observers greatly appreciated these events and overlooked the fact that generally they were watching rehearsed performances rather than true examinations.
Regardless of its doubtful quality, private school education lay beyond the means of most antebellum Texas families. In general only the well-to-do could afford to buy schooling for their children, a situation that conflicted with democratic ideals and growing American faith in education. Texans expressed considerable interest during the 1850s in establishing a free public school system. Action came, however, only after the legislature devised a scheme to establish a fund that could be used for loans to promote railroad building, with the interest going to support public schools. In January 1854 the legislature set aside $2 million of the bonds received from the boundary settlement in 1850 (see BOUNDARIES OF TEXAS and COMPROMISE OF 1850) as a "Special School Fund." Two years later another act provided for loans from this fund to railroad corporations. Interest from the school fund was to go to the counties on a per-student basis to pay the salaries of public school teachers, but counties had to provide all the necessary buildings and equipment. Knowing that this would be expensive and doubtless feeling pressure from private school interests, the legislature permitted local authorities to hire teachers in existing educational institutions. It quickly became apparent that the interest from the school fund would be totally inadequate to do more than subsidize the schooling of children from indigent families. The private schools benefited, and public education remained only a dream. (see HIGHER EDUCATION.)
Educational opportunities notwithstanding, literacy, at least as measured by census enumerators, was high in antebellum Texas. The state's many newspapers (three dailies, three triweeklies, and sixty-five weeklies by 1860) constituted the most widely available reading matter. Among the most influential publications were the Telegraph and Texas Register, the Clarksville Northern Standard, the Marshall Texas Republican, the Nacogdoches Texas Chronicle, the Austin State Gazette, the Dallas Weekly Herald (see DALLAS TIMES-HERALD), and the Galveston Daily News (see GALVESTON NEWS). What the papers lacked in news-gathering facilities they made up for with colorful editors and political partisanship. Virtually anyone who cared to could find both information on current events and entertainment in an antebellum newspaper.
Texans had a notable variety of amusements. Amateur theater groups, debating societies, and music recitals, for example, provided cultural opportunities. Many other amusements were notably less genteel. Horse racing, gambling, and drinking were popular, the last to such a degree that the temperance crusade against liquor was by far the most important reform movement of the era. Cruder amusements often sparked violence, although antebellum Texans needed very little provocation. The constitution had outlawed duels, the Old South's traditional method of settling affairs of honor, but violence in Texas was generally more spontaneous and less stylized, anyhow. In June 1860, for example, a man named Johnson spotted on the street in Hempstead one McNair, with whom he had a long-standing quarrel. Firing three times from his second-floor hotel room window, he hit McNair in the neck, side, and thigh. As Johnson prepared to ride away, a crowd gathered around the dying McNair. "By God, a good shot that," one said.
Politics in antebellum Texas reflected the state's preeminently Southern economic and social structure. Institutionally, political arrangements were highly democratic by the standards of that era. The Constitution of 1845 permitted all adult white males, without regard to taxpaying or property-holding status, to vote and hold any state or local office. In practice, however, wealthy slaveholders dominated officeholding at all levels and provided the state's political leadership. Their control was democratic in that they were freely elected, and they governed without having to coerce nonslaveholders into supporting their policies. Nevertheless, leadership by a minority class whose status depended on the ownership of slaves introduced an element of aristocracy and gave a pro-Southern cast to antebellum Texas politics.
Virtually all of the men who governed Texas from 1846 to 1861 were identified with the Democratic party. "We are all Democrats," Guy M. Bryan wrote in 1845, "since the glorious victory of that party, who fearlessly espoused our cause and nailed the 'Lone Star' to the topmast of their noble ship." When the Whig party displayed a lack of enthusiasm for the Mexican War and supported President Zachary Taylor in denying Texas claims to New Mexico territory in 1849–50, Bryan's statement became even more accurate. The Democrats won every presidential and gubernatorial election between 1845 and 1861. Indeed, so complete was their domination that the closest contests during these years came as a result of intraparty divisions, usually with the towering figure of Sam Houston occupying center stage.
J. Pinckney Henderson easily won the first race for state governor in December 1845 and took office in February 1846. He presided over the transition from republic to state and spent the latter part of 1846 commanding Texas troops in Mexico. Worn out from the war and in failing health, Henderson declined in 1847 to run for reelection. He was succeeded by George T. Wood, a Trinity River planter who had the support of Sam Houston. Wood served from 1847 to 1849, as the dispute over the New Mexico boundary built to crisis proportions. During his term Texans participated in their first presidential election and gave Democrat Lewis Cass 69 percent of the vote in his contest with the Whig Zachary Taylor. Wood lost the governorship to Peter Hansborough Bell in 1849, probably because of lukewarm support from Houston and Bell's promise of a more aggressive policy on the boundary question. The Compromise of 1850, although considered a shameful surrender by some extremists, did not seriously injure Bell's pro-Southern reputation. He defeated four opponents in 1851, including the Whig Benjamin H. Epperson, and served a second term before resigning in 1853 to take a seat in Congress. In the meantime the Democratic presidential candidate, Franklin Pierce, carried Texas overwhelmingly in the election of 1852. The Whigs made their most serious bid for the governorship in 1853 with the candidacy of William B. Ochiltree. Democrats met this challenge by agreeing to support one man, Elisha M. Pease, rather than their usual multiplicity of candidates. Pease's first term was significant for efforts to start a public school system and encourage railroad building. It also marked the appearance of a new political party that offered the most serious threat to Democratic domination of state politics during the 1850s. The American (Know-Nothing) party, an antiforeign, anti-Catholic organization that had originated in the Northeast, appeared in Texas during 1855 and attracted many Whigs, whose party had disintegrated as a result of the Kansas-Nebraska Act in 1854. The Know-Nothings supported Lieutenant Governor David C. Dickson for governor in 1855 and forced the Democrats to call a hurried state convention and unify in support of Governor Pease. The new party had considerable success in legislative and local elections, but Pease defeated Dickson with relative ease. The Know-Nothings lost badly in their support of Millard Fillmore during the presidential race of 1856 and rapidly withered into insignificance thereafter.
During Pease's second term (1855–57), Texas politics came to focus on pro- and anti-Houston issues as they had not since the end of the republic. Senator Houston's consistent Unionism in the crisis of 1850 and in voting against the Kansas-Nebraska Act greatly irritated ultra-Southern Democrats in Texas. A flirtation with the Know-Nothings had the same effect. Believing that he would not be reelected to the Senate when his term ended in 1859, Houston decided to run for governor in 1857 as an independent. The regular Democrats nominated Hardin R. Runnels, a native Mississippian with strong states'-rights beliefs, and a bitter campaign followed. Houston presented himself as a champion of the Union and his opponents as disunionists, while regular Democrats said that Old Sam was a Free-Soil traitor to Texas. Runnels won by a vote of 32,552 to 23,628, handing Houston the only defeat he ever suffered in a major political campaign. As governor, Runnels pursued an aggressive policy toward Indians in Northwest Texas, and there was more bloodshed on the frontier in 1858–59 than at any other time since 1836. The Comanches, although pushed back, mounted destructive raids on exposed settlements in 1859, creating considerable dissatisfaction with the Runnels administration. Also, during Runnels's term sectional tensions increased as the governor endorsed an extreme version of states' rights, and leading Democrats, including John Marshall, state party chairman and editor of the Austin State Gazette, advocated ultra-Southern policies such as reopening the African slave trade.
These developments under Runnels set the stage for another bitter and exciting gubernatorial contest in 1859. The regular Democrats renominated Runnels and Lieutenant Governor Francis R. Lubbock on an ultra-Southern platform, while Houston and Edward Clark opposed them by running as Independent or Union Democrats. This time, in his last electoral contest, Houston defeated Runnels, 36,227 to 27,500. The victory may have resulted in part from a lack of pro-Southern extremism among Texas voters, but Houston's personal popularity and the failure of Runnels's frontier policy played key roles, too. In any case, the state legislature's choice of Louis T. Wigfall as United States senator only two months after the gubernatorial election demonstrated that Unionism by no means had control in Texas. Wigfall, a native of South Carolina, was a fire-eating secessionist and one of Houston's bitterest enemies in Texas.
Houston's inaugural address, which he delivered publicly rather than to the hostile legislature, concluded with a plea for moderation. "When Texas united her destiny with that of the United States," he said, "she entered not into the North, nor South. Her connection was not sectional, but national." The governor was at least partly correct about the attitude at the time of annexation, but by 1860 Texas had become so much a part of the Old South that not even Sam Houston could restrain the state's rush toward secession. Ultrasoutherners controlled the Democratic state convention in 1860 and sent a delegation headed by Runnels and Lubbock to the national convention in Charleston, South Carolina. Rather than accept a platform favoring Stephen A. Douglas, the northern Democrat who called for popular sovereignty to decide the matter of slavery in the territories, the Texans joined other Deep South delegations in walking out of the convention. This step opened a split in the Democratic party that soon resulted in the nominations of Douglas by the Northern wing and John C. Breckinridge by the Southern. In the meantime the Republican party nominated Abraham Lincoln on a platform opposing the spread of slavery, and conservatives from the upper South formed a Constitutional Union party aimed at uniting those who wished to avoid disunion. Sam Houston received serious consideration for the presidential nomination of the new party but lost to John Bell of Tennessee.
Regular Democrats in Texas supported Breckinridge and threatened immediate secession if the "Black Republican" Abraham Lincoln won. The Opposition, as those who opposed the Democrats were now called, turned to Bell and the Constitutional Unionists in the hope of preventing disunion. This group, which generally sought to carry on the traditional unionism of the Whigs, Know-Nothings, and Houston Independent Democrats, did not oppose slavery or Southern interests. They simply argued that secession amounted to revolution and would probably hasten the destruction of slavery rather than protect it. A minority from the outset, the Opposition saw their cause weakened further during the late summer of 1860 by an outbreak of public hysteria known as the Texas Troubles. The panic began with a series of ruinous fires in North Texas. Spontaneous ignition of phosphorous matches due to extremely hot weather may have caused the fires. Several masters, however, forced their slaves to confess to arson, and Texans decided that a massive abolitionist-inspired plot threatened to destroy slavery and devastate the countryside. Slave and white suspects alike fell victim to vigilante action before the panic subsided in September. "It is better," said one citizen of Fort Worth, "for us to hang ninety-nine innocent (suspicious) men than to let one guilty one pass." (see SLAVE INSURRECTIONS.)
In November 1860 Breckinridge defeated Bell in Texas by a vote of 47,458 to 15,463, carrying every county in the state except three: Bandera, Gillespie, and Starr. Abraham Lincoln received no votes in Texas, but free-state votes made him the president-elect, scheduled to take office in March 1861. His victory signaled the beginning of a spontaneous popular movement that soon swept the Lone Star State out of the Union. True to its antebellum heritage as a growing part of the cotton frontier, Texas stood ready in 1860 to join the other Southern states in secession and war.
Walter L. Buenger, Secession and the Union in Texas (Austin: University of Texas Press, 1984). Randolph B. Campbell and Richard G. Lowe, Wealth and Power in Antebellum Texas (College Station: Texas A&M University Press, 1977). Abigail Curlee, A Study of Texas Slave Plantations, 1822–1865 (Ph.D. dissertation, University of Texas, 1932). Earl Wesley Fornell, The Galveston Era: The Texas Crescent on the Eve of Secession (Austin: University of Texas Press, 1961). Llerena B. Friend, "The Texan of 1860," Southwestern Historical Quarterly 62 (July 1958). Robert Kingsley Peters, Texas: Annexation to Secession (Ph.D. dissertation, University of Texas at Austin, 1977). Charles W. Ramsdell, "The Natural Limits of Slavery Expansion," Mississippi Valley Historical Review 16 (September 1929). Ernest Wallace, Texas in Turmoil: The Saga of Texas, 1849–1875 (Austin: Steck-Vaughn, 1965).
Randolph B. Campbell, "ANTEBELLUM TEXAS," Handbook of Texas Online (http://www.tshaonline.org/handbook/online/articles/npa01), accessed May 18, 2013. Published by the Texas State Historical Association. | http://www.tshaonline.org/handbook/online/articles/npa01 | 13
25 | Timescale of the San Juan
by Norm Vance
This is a chronological history with brief historical references. For more detail see specific articles on this website.
This date is an apt starting place for our time scale of the San Juan because the area was 100% different than now. At this point in time the San Juan area was a shallow ocean. The beach was up around the present Canadian border. Almost all of the earth’s land mass was composed of separate “plates” which were clumped together on the other side of the planet in a gigantic area now called Gondwanaland. The land was barren but the seas were teeming with tiny life forms. (Check Florissant Fossil Beds Northeast of Gunnison.)
The enormous plates moved about the planet with several forming the continents of the western hemisphere. The San Juan area is now a desert but much of the earth’s surface is lush with plant life. This is the age of the dinosaur. (Check at Dinosaur Monument northwest of Montrose.)
200 to 80 MYA
Gold & Silver
During this time many upheavals were caused by the Pacific Plate grinding into the North American plate. At times vast seaways filled much of the San Juan area, bringing sand and clay and resulting in layers of sandstone, shale, and various other layered formations. At about 80 MYA the pressure of the plate action became too great, and large fractures opened from the present Boulder, Colorado, area across the San Juan area and toward the southwest corner of the state. These deep cracks in the surface allowed molten minerals to flow upward. They flowed into empty spaces in the broken crust and cooled to solids. Eighty million years later people mined into these deposits and removed gold, silver, and many rare minerals. The area is known as the Colorado Mineral Belt. All but one of Colorado's famous mines is located in this ancient area.
A new set of plate pressures began pushing on the west side of the country's land mass, causing great folding and uplifting of the earth's surface. This formed early mountains. For the next 40 million years a quiet period of erosion took place. This erosion carried material out of the higher lands and spread it around the mountains' edges, forming deserts.
Boom! Starting at 35 MYA and lasting almost ten million years, great volcanic explosions blew holes in the San Juan area. Huge mounds of volcanic ash and magma grew into new mountains. When empty of magma and energy, the huge volcanoes collapsed, leaving craters. New volcanoes grew and also collapsed until the craters overlapped into a moonscape.
The great volcanic activity died down about 20 MYA. This was the end of the era of earth-shaking events in the San Juan area. Molten minerals still leaked into fractures, the floors of old volcanoes settled and filled in, and slowly a time of peace came to the new baby mountains.
This peace was not a still time, however, just a slower one. There was an enormous amount of heavy rock thrown out or pushed up into a rugged landscape. Gravity acted on the more unstable rock. Rock slides and rock movement both exposed new mountain peaks and covered other vast areas.
Other forces went to work carving the San Juan: wind, water, and ice. The ice age caused a buildup of ice packs on the mountains and in the valleys. These glaciers moved and scoured valley walls and floors. Some valleys became wide canyons. As sections of earth were slowly pushed upwards, rivers cut deep, narrow gorges. Over vast periods of time, wear and erosion cut the San Juan Mountains and valleys into the shape we find today.
Slowly winds brought in soil components. Winds, birds and animals carried seeds and the area grew a vast array of plants.
The dinosaurs were gone by 65 million years ago. Other animals survived the early harshness of planet earth and were separated as the plates pulled the land mass apart. Great wooly mammoths, bison, early elk and deer, and a wide variety of animals had the San Juan to themselves for millions of years.
4 MYA to 30 TYA – Mankind
(Please note the time changes from millions of years ago to thousands of years ago.)
Mankind arose and lived in what are now Africa, Asia, and Europe. The Americas remained cut off from the Eastern Hemisphere by the world's vast oceans until about 30,000 to 50,000 years ago.
The last great Ice Age began about 60 TYA. The ice sheets took water from the oceans, lowering them hundreds of feet. The ice buildup covered Canada completely and projected down into the United States. Higher-elevation areas such as the San Juan developed glaciers, which once again changed the landscape.
The Humanity Pump
Although it was the Ice Age, temperatures and weather still varied. Mankind had spread north and east into what is now Siberian Russia. Slowly, over generations, hunting tribes followed prey across the ice cap toward Alaska. The warmer times drew them north and east. Then a cooling period drove them south into Alaska and Canada. The ice age weather was a pump that pushed early man into the Americas. Given several thousand years, these people spread all the way to the southern tip of South America.
10 TYA Man Enters the San Juan
Just when the first man or tribe entered our San Juan area is unknown. Because mountain life is harsher than life at lower elevations, the early nomads populated the lower areas of America first. In the San Juan area early man first followed mammoth or bison up waterways higher and higher into the mountains during summer's heat. These early people are called Paleo Indians. They are known as "The Great Hunters." It is truly a fitting name considering they faced huge mammoths with crude stone-tipped spears and clubs and hunted three-quarters of the mammals to extinction!
These early people lived in crude, temporary tent-like structures. Small tribes moved about an area according to the supply of food and the season. Their spears were stone-tipped, and these points and other stone tools are nearly all we know of them. All of the other material goods they possessed were organic and have long since rotted.
The general weather and temperature patterns began a slow change, becoming warmer. This caused a movement of Indian tribes across the southwest. The early hunters followed prey east to the plains. Tribes from the west and south moved into the area, living mostly in the dry deserts of New Mexico and Arizona. These people are known as the desert culture, and they developed into the Anasazi.
The Anasazi lived in a large area of the southwest for 2,000 years, developing from nomadic hunters into farmers and city builders. They moved away from the San Juan area when severe drought conditions developed toward the end of the 1200s. They became the current Pueblo culture. A couple of centuries later the Navajo and Ute Indians moved into the mostly empty San Juan area.
400 YA Close Encounter!
Europeans began sailing to this hemisphere with Columbus. Later, Spanish explorers landed in Mexico and pushed north and west, working their way up the Rio Grande Valley. It was a close encounter for the Pueblo, Ute, and Navajo who first saw a horse-mounted Spaniard. They had never seen a horse or a non-Indian before.
Trade was quickly set up between the Indians and the Spanish. The Spanish wanted furs from the Indians and needed workers to herd sheep. The Indians wanted the wondrous horse. They traded hides and their children to the Spanish for horses, which they often killed and ate. Anger and violence soon began as the whites moved onto more Indian land and the Indians began stealing horses and cattle. The Spanish treated the Indians harshly in an effort to change every aspect of their culture and religion.
White men moved into the New Mexico area which was considerably easier to travel through than the mountains to the north. The Utes traded hides gathered in the San Juan. This ultimately caused a few early Europeans to venture into the southern borders of the area.
The San Juan became of great interest to the Spanish, who had long expected to find civilizations in the area loaded with gold and silver. The fur trade was already established, and reports that traces of silver and gold had been found in the mountains to the north got their attention. The governor of New Mexico sent Juan Rivera to explore north of Santa Fe and into the mountains. He returned with still more inviting news of the high mountains.
200 YA – 1776
At the same time our country’s founding fathers were preparing the Declaration of Independence for a group of states on the eastern coast, a small band of men and women set off from Santa Fe on a mission of adventure and exploration. The goal was to find a passable route from the Catholic missions in New Mexico to the newly established missions in California.
The leader of this expedition was Friar Dominguez. It was Friar Silvestre Escalante who recorded the activity and discoveries made. Escalante's records were the first accurate descriptions of the southern San Juan. The mountains and roughness of the land caused exact locations to vary, but maps were made and landmarks given names.
The terrain also was a challenge for the expedition. After skirting along the southern and western San Juan and making it north of the Grand Canyon, the expedition turned back and returned to Santa Fe.
Following the Louisiana Purchase, President Jefferson sent several expeditions into the west. The famous Lewis and Clark group pushed across the Rocky Mountains and on to the Pacific, taking a northerly route. Another group, led by Zebulon Pike, pushed into the San Luis Valley just east of the San Juan. This was then Spanish territory, and a Spanish army patrol arrested the Pike group.
Lewis and Clark, Pike, and others reported on the wild animal life in the mountains. Early trappers began working their way up the waterways into the mountains. These were powerful and strong-willed men who began the "Mountain Man" legends. They had to face not only a rough natural environment but also sometimes unfriendly Indians.
The demand that supported trapping came from England. The English gentleman of the day simply had to have a beaver-skin hat or two in his collection. This fad drove the trappers to endure the harsh life. They left their names on many mountains, rivers, parks, and other landmarks.
A few of these hardy mountain men began to see a need for outposts to service the trapping business. Several trading posts and military forts sprang up. A minister, Joseph Williams, left a description of a fort on the Uintah River. He attempted to "save" the mountain men, whom he described as "fat, dirty, idle, greasy, drunken, swearing, and unbelievably wicked." Preacher Williams saved himself by leaving the fort. By 1840 the beaver hat fad had faded and the fur market crashed, ending the big fur business but leaving the San Juan explored by white men. Many of the trading posts became towns, and some became the cities of today.
The colorful John Fremont was the next to explore the area.
Fremont had a long history of exploring the mountains and popularizing them in his writings. He had problems with the army and was once court-martialed. Later, as a private citizen, he was hired to blaze a trail to California from St. Louis. He pushed his group into the San Juan from the east, just north of present-day South Fork, Colorado. Fremont moved 120 mules and 32 men into the mountains in early winter. No mules and only 23 men made it back. Once again the San Juan turned back the white men's attempts to blaze a trail through it.
The Old Spanish Trail
The route followed by Dominguez and Escalante was traveled by others and became the historic Old Spanish Trail that ultimately linked Santa Fe with Los Angeles.
The Santa Fe Trail
In 1821 the Santa Fe Trail was opened between the east and Santa Fe. The combination of the Old Spanish and Santa Fe trails allowed new traffic into the area. The trails were used for decades by trappers, settlers, cowboys and other brave souls.
Spoils of War
In 1846-47 the United States and Mexico were at war. The United States defeated Mexico and won from Mexican rule the lands that became New Mexico, Arizona, California, Nevada, Utah and Colorado.
Pagosa became better known when, in 1859, a party led by Captain John Macomb entered the area following the Old Spanish Trail. Macomb camped at and was awed by the Great Pagosa Hot Spring. He returned east with news about his travels to the Great Pagosa and of gold deposits he had seen in Southwest Colorado rivers.
The Lure of Gold
The country was already deep into gold fever because of the rush to California and the gold finds in Colorado west of Denver. The lure of gold in the San Juan began a rush that would forever change the area. The San Juan Mountains had been a barrier holding back exploration and settlers. Harsh winters and rugged terrain had been more than the Spanish or early Anglos wanted to deal with. Fortunes in gold were a different matter, and soon after the news of streams gleaming with gold was received, the rush was on.
The area northwest of Pagosa Country saw the major gold and silver strikes. The towns of Silverton, Ouray, Telluride, and others sprang to life overnight. At the same time Summitville, the largest mine in the Pagosa area, began production. Summitville is south of present-day Wolf Creek Pass.
Pagosa, with its well-traveled trails, remained an access route and crossroads. The Great Pagosa Hot Spring was well known to travelers and miners. In the summer months of the early 1870s the Great Pagosa Hot Spring was a welcome rest and recuperation spot.
In 1878 the U.S. Army moved to Pagosa Springs and built a fort on the bank of the San Juan River across from the Great Pagosa Hot Spring. The soldiers, mostly black men known as "Buffalo Soldiers," were there to protect miners and settlers from unhappy Indians.
In the early 1800s there were many sheep herders in the Pagosa area. Sheep wandered the mountains in summer and were driven to the southern foothills in winter. Intermixed with the sheep were cattle, and large ranches developed across the area, but lumber became the area's main industry.
The major railroad line bypassed Pagosa Springs to the south, stopping at Pagosa Junction on its way from Chama to Durango. Train tracks were laid into many valleys and to Pagosa Springs. These were "spur lines" used to haul lumber, and they ultimately connected to the main line.
Wood from Pagosa was shipped out for many diverse uses, from home construction in Denver to chopsticks in Japan.
During World War I the need for a better route from the San Luis Valley to the San Juan Valley resulted in the construction of Wolf Creek Pass.
The towns that now exist in the San Juan area are the towns that survived. Many settlements and towns began only to fade away as time passed. It is to the credit of the past residents of Pagosa Country that their courage and tenacity kept the area alive. In a harsh environment their spirit overcame the ups and downs of economy and nature.
Pagosa now has a multi-faceted economy and lifestyle. Cattle ranches still exist, and the cowboy is very much a part of the area. Light industry can be found, from small lumber mills to many cottage industries. Tourism and summer and winter sports are the main industries in modern times. | http://pagosasprings.com/timescale-of-the-san-juan/ | 13
107 | Life in the 1880's: The Economy of the 1880s
Author: Dorothy W. Hartman
Overall, the period following the Civil War and Reconstruction was marked by expansion on all fronts of the economy. Up to 1880, agriculture was the principal source of wealth in the U.S., but that was about to change. Immigration, primarily from Europe, and internal migration west supplied the human capital for constructing railroads and for building and operating industries of unprecedented scale. This growth also gave rise to the new middle class, made a few very wealthy, and trapped masses of immigrants and unskilled laborers in lives of poverty.
Two philosophies of the day underscored and, to some extent, justified the accumulation of wealth and the monopolization of certain sectors of the economy. The first was the long-held American belief in equal opportunity: that if one worked hard enough and applied his energies and talents in the marketplace, he would be successful. The 'rags to riches' fiction of Horatio Alger drew inspiration from this belief. The second belief, touted by capitalists and used politically to deter government regulation, derived from the scientific theory of the day: Darwin's Origin of Species. Darwin's theory of the biological survival of the fittest was taken under the wing of capitalists and emerged as 'laissez faire,' the philosophy that, in a market left unfettered by government intervention and regulation, those corporations best suited to compete would thrive and others would fall away, thus strengthening the economy as a whole. At the same time, however, the very industrial and financial leaders who used this platform to argue against regulation were the same men who sought and got government subsidies and other special treatment for railroad construction and other industrial infrastructure.
The age was also marked with the kinds of practical inventions that impacted everyday life - kerosene, the light bulb, the tin can, breakfast cereal and others. The population changed from a society of producers to one of consumers of mass marketed products churned out in factories far from home and marketed through new mail order catalogs.
Farm production was impacted by the introduction of machines capable of plowing, planting and reaping thousands of times more than human labor could ever accomplish. As a result, more and more acres west of the Mississippi River came into large scale production and farming elsewhere became increasingly specialized and more commercial. Increasing production and decreasing demand for farm commodities caused prices to fall in the 1870s and 1880s in the face of rising costs associated with transportation and borrowing for land purchases, improvements and equipment. During this period, American farmers also became part of the growing global economy as countries like Australia, Argentina and Russia added their agricultural products to the market, thereby affecting prices on a world-wide basis.
Although the last quarter of the nineteenth century was generally considered to be one of continual, but rocky, expansion, it was not without setbacks. Three economic declines, in 1873, 1884 and 1893, some more severe than others, marked the swings of the economic cycle. Prices in general trended downward after 1873, a decline that lasted until 1896 or 1897. This deflation, resulting mostly from the failure of the money supply to keep pace with the rapid increase in the volume of goods produced, affected agricultural goods as well as manufactures. During 1883, at the start of that particular economic slowdown, 10,299 businesses closed their doors. Not until 1886, after a slow but steady improvement, did economic conditions in general recover. Many citizens felt like they were living through a "great depression," even though production expanded nearly continuously until 1893, when a true depression hit the country.
Hallmark of the Age—Industrial Expansion
At the beginning of the Civil War, the United States' industrial output, while increasing, did not come close to that of major European nations. By the end of the century, though, this country had become the leader in manufacturing. The value of American manufactured products rose from $1.8 billion in 1859 to over $13 billion in 1899. Some modern economists estimate that the gross national product, or GNP, increased by 44 percent between 1874 and 1883 alone, and continued to expand. Between 1850 and 1900, the geographical center for manufacturing moved westward from Harrisburg, Pennsylvania to Columbus, Indiana. By 1890, Illinois, Indiana and Ohio turned out more than 30 percent of all American manufactured goods. Indiana was an increasingly urban state, with a population density in 1880 of 55 people per square mile, though the majority of Hoosiers still lived in rural areas.
Economists and historians cite a number of conditions which contributed to the rapid industrial transformation in this country. Natural resources, previously untapped, were exploited and, with advances in production technique, developed into new products. The growth of the country, both in settled areas and population, added to the size of the national market while protective tariffs shielded this market from foreign industrial competition. Foreign capital to finance this expansion entered the market freely, while European immigration, 2.5 million in the 1870s and twice that in the 1880s, provided the labor needed by industry.
This period also saw rapid advances in basic science and technology which created a wealth of new machinery, new processes and new power sources that increased productivity in existing industry and created new industries as well. This practical application of technology and science, so indicative of the age, affected some aspect of daily life in every community. In 1880, 13,000 patents were issued and that number continued to grow, totaling 218,000 by the end of the century. Dynamite, oleomargarine, phonographs, cash registers and typewriters were only a few examples of the new inventions and patented devices.
On a national scale, though, the emergence of large-scale heavy industry - mining, iron and steel production, and the exploitation and refining of crude oil - marked this period of history. These industries, coupled with the expansion of the railroad system, defined 'big business,' created enormous wealth, gave rise to the labor movement, and influenced future government decisions regarding regulation.
Railroads and Big Business
Most historians credit the phenomenal growth of the railroad system as the common linking factor in the country’s economic expansion after the Civil War. Transcontinental and feeder lines brought raw materials to industrial centers, agricultural goods and finished products to cities across the country and to coastal ports for overseas shipment. They carried settlers into territories west of the Mississippi River, passengers through to California and the West Coast, and mail to every corner of the nation and its territories.
Railroads’ impact on the economy was threefold, according to historian Albro Martin. First, the railroad industry reduced the real cost of transportation to a fraction of what it had been. Secondly, it brought all sections of the country into the national economy, making regional specialization on a grand scale possible. Finally, it gave birth to a host of other industries for which it became an indispensable input or from which it derived the huge quantities of materials and equipment called for by railroad investment.
Competition for business was steep, however, and cut heavily into company profits, leading one commentator to note that a person trying to run a railroad honestly would be like Don Quixote tilting at a windmill. (Charles F. Adams, Jr., in Railroads: The Origin and Problems, 1879). The cost of shipping 100 pounds of goods from Chicago to New York fell from a high of $2.15 in 1865 to between 35 cents and 75 cents in 1888. To stay in business, ruthless railroad owners charged higher rates on the short feeder lines where there was little or no competition, circumvented 'official' rate schedules by giving rebates to large customers and otherwise undercut smaller competitors until they failed. By 1879, 65 lines were bankrupt. During the 1880s, the surviving major rail lines responded by building or buying lines in order to create interregional systems. In that decade, more than 70,000 miles of line were built, with 164,000 miles in operation by 1890, most of it in the trans-Mississippi West. (See: Transportation)
In the face of competition and falling rates, the railroads developed a method, called ‘pooling,’ that established standard rates. Pooling was an agreement among rail companies operating in the same market to set and maintain rates at a certain level. Revenues were then ‘pooled’ and distributed among the lines. For example, in 1877, officials of the New York Central, Erie, Pennsylvania and the Baltimore and Ohio made a rate agreement for freight from New York to Chicago and St. Louis, putting the proceeds in a common pool to be divided between the lines on a 33, 33, 25 and 9 per cent basis respectively. At the same time, they also agreed to reduce the wages of their workforce by 10 per cent, leading, some say, to the great railroad strike of 1877. Pools became a ‘hot topic’ politically, leading ultimately to the Interstate Commerce Act of 1887, which outlawed the practice.
One of the most vociferous groups to speak out against the railroads was farmers. Already besieged by falling prices for their goods, they also faced higher shipping costs on the smaller feeder lines that faced little or no competition and thus could demand higher rates. It was often more costly to ship items short distances than it was to ship them between regions. The Granger movement of the 1870s and the Agrarian movements in the South and Great Plains during the same decade and into the 1880s adopted the railroad rate problem as one of their platforms. (See: Politics)
Heavy Industry and Big Business
With railroads as the catalyst and advances in technology as the tools, heavy industry made huge strides in production, capitalization and consolidation. The steel industry and the oil industry are two major examples dominating the world of big business during the period.
Iron and Steel
The demands of a geographically expanding nation coupled with the need to build industrial infrastructure made the iron and steel industry of paramount importance in the last quarter of the nineteenth century. Competition was steep, though, and led ultimately to consolidation and integration among the competing companies. Technological advances, namely the Bessemer and open hearth processes for manufacturing steel, greatly improved efficiency and productivity. Huge fields of iron ore discovered in Minnesota and Michigan, and mined commercially in the 1870s and 1880s, fed the growing number of furnaces, just as the combination of rail and Great Lakes shipping made connecting raw materials to manufacturing centers easier. This conjoining of events and advancements gave the industry the critical mass necessary to become a major economic factor.
In 1880, there were about 792 iron and steel manufacturers in the country. Making farm implements alone required approximately $62,000,000 in capital. Although Pittsburgh was considered the center of steel production, there were also multimillion dollar facilities in Illinois, Alabama and Colorado. These producers of iron and steel made the raw material for other industries making metal products large and small, from locomotives to agricultural and industrial machinery to knives. In 1885, the nation produced just under 5 million tons of pig iron and about 6.5 million tons of steel. In 1886, the corresponding numbers were approximately 6.5 million tons of pig iron and about 9 million tons of steel.
Andrew Carnegie was considered by most to be the kingpin of the steel industry. His rise from poor, immigrant bobbin boy in a textile mill to multimillion dollar industrialist was the stuff of legend. When others expanded their operations in good times, he instead chose to expand in lean times at less cost. He practiced what became known as 'horizontal integration,' which meant he bought up his competitors (again, in lean times) to control the market in one product.
Oil and Energy
With the discovery of oil in northwest Pennsylvania in 1859, the United States entered a new era of energy, one in which it is still entwined. This emerging industry had one advantage held by no other at the time - the US was the only source of commercially available crude oil in the world and consequently faced no foreign competition for its products. Speculation ran wild and the hills of Pennsylvania took on the frenzy of the California gold rush of a decade before. Production rose dramatically, from 10 million barrels a year in 1873 to 20 million barrels in 1880.
Before the invention of the gasoline engine, kerosene was the most important product. In the early years in Pennsylvania, hundreds of small refineries, reminiscent of the stills of neighboring moonshiners, produced kerosene under dangerously explosive conditions. By the 1870s, refining had been improved and the volume of crude being pumped caused prices to fall. Refineries became larger and more efficient. By the 1870s, the chief oil-refining centers were Cleveland, Pittsburgh, Baltimore and the New York City area. Of these, Cleveland was the fastest growing, due to advantages in rail and water transport.
Rockefeller and the Trust
The Standard Oil Company of Cleveland, led by John D. Rockefeller, emerged as the largest oil refining business in the country. By 1879, Rockefeller controlled 90 per cent of the nation’s oil refining capacity, plus a network of oil pipelines and reserves of petroleum still in the ground. Because of this monopoly and the expansion of the industry in general, the company attracted much attention in the late 19th century. By means fair and foul, Rockefeller and his associates cornered the market, driving prices down to destroy competitors, demanding and getting rebates from railroad companies for shipping, and employing spies and bribery to attract customers from competing companies.
As Rockefeller began to buy up companies in other states, he came into conflict with Ohio law, which prohibited owning plants in other states or holding stock in out-of-state corporations. To circumvent this restriction, Rockefeller, with the help of lawyer Samuel C.T. Dodd, developed the "trust," in which the stock of Standard Oil of Ohio and all the other companies Rockefeller purchased was vested and placed under the control of nine trustees. The result was that competition nearly disappeared, and by 1892, Rockefeller was worth $800 million. The Trust's complete control of the industry was soon synonymous with monopoly in the eyes of the public. Trusts became a vehicle in other industries, too. At one point the Sugar Trust, run by the American Sugar Refining Company, controlled 98% of sugar refining in the United States. In response, the government passed the Sherman Anti-Trust Act in 1890, which had mixed success in controlling these monopolies.
Labor’s Response to Big Business
The demand for labor brought on by increased industrialization drew increasing numbers of rural migrants, women, children and newly arrived immigrants into the workforce after the Civil War. In the 1880s, more than 5 million immigrants arrived in this country, the majority of them from England, Ireland and Germany. 1882 marked a new high, with 788,992 arrivals - more than 2,100 per day. Additionally, falling agricultural prices during that decade caused many young men from farming communities to move to cities or migrate west. Those who found work in factories, mines or railroads had to bend to the demands of a new schedule, tied to machine and time clock. For some, this proved too onerous, and they turned to organized resistance.
Many historians note the railroad strike of 1877 as a watershed in the late nineteenth-century labor movement. In the economic downturn that followed the Panic of 1873, railroad managers cut wages, increased workloads and laid off workers, particularly those who belonged to unions. In July a series of strikes broke out among unionized workers who were protesting wage cuts. Violence spread from Pennsylvania into the Midwest. At one point, nearly two thirds of the railway mileage in the country was shut down. Private police - the Pinkertons - and state militia were called in by company owners to control the strikers. The courts issued injunctions against the strikers, citing conspiracy to obstruct the U.S. mail in some cases. In August that year, a judge in Indianapolis gave railroad strikers who had violated his injunction short jail sentences for contempt of court. After a month of unprecedented carnage, President Hayes sent in federal troops to end the strikes.
Craft unions dated from the early nineteenth century, but their narrow focus kept them from broad support and power. The National Labor Union, founded in 1866, failed to survive the hard times of the 1870s. Only the Knights of Labor, a broad-based labor organization founded in 1869, survived the depression of 1873. The Knights, at first associated with garment cutters, opened its membership to other workers in the 1870s. Knights membership peaked in 1886, at 730,000. Unlike the narrowly focused trade unions at the time, which excluded everyone except workers in particular crafts, the Knights welcomed women, African-Americans, immigrants and all unskilled and semi-skilled workers. The Knights believed they could eliminate conflict by establishing a cooperative society in which laborers worked for themselves. "There is no good reason," stated Grand Master Terence V. Powderly, "why labor cannot, through cooperation, own and operate mines, factories, and railroads." Most Knights leaders opposed strikes, but the failure of negotiations with Jay Gould in 1886 during a dispute over wages and union recognition for railroads in the Southwest caused militant craft unions to break away. Membership in the Knights dwindled.
Once economic conditions improved in the early 1880s, labor groups began to campaign for an eight-hour work day. This renewed effort by laborers to regain control of their work gathered steam in Chicago, where radical anarchists - who believed that voluntary cooperation should replace government - and trade unionists promoted the cause. On May 1, 1886, mass strikes and the largest spontaneous labor demonstration in the nation’s history took place, with about 100,000 workers participating. Police mobilized, especially around the large McCormick reaper factory. The day passed calmly, but two days later, police stormed an area near the McCormick factory where striking union members and nonunion strikebreakers were battling. Police shot and killed two unionists and wounded a few others. Labor groups rallied the next evening at Haymarket Square, near downtown Chicago, to protest police brutality. As police approached the rally, a bomb was thrown and exploded near them, killing seven and injuring sixty-seven. Mass arrests followed, including eight anarchists, who were tried and convicted of the bombing, even though the evidence against them was questionable. Four were executed, one committed suicide in prison and the remaining three were pardoned in 1893. The identity of the bomber was never clearly established.
Out of this upheaval, the American Federation of Labor emerged as the preeminent workers' organization. The AFL originated in a movement which arose in Terre Haute, Indiana and Pittsburgh in 1881, when Samuel Gompers, Adolph Strasser and Peter J. McGuire formed the Federation of Organized Trades and Labor Unions of the United States and Canada. Samuel L. Leffingwell of the Indianapolis Trades Assembly became the second president of the organization in 1882. By 1886, the organization had transformed into the AFL, which at that time had a membership of about 140,000. Gompers became the president, and remained in power well into the twentieth century.
The AFL avoided the idealistic rhetoric of the Knights of Labor and instead promoted concrete goals - higher wages, shorter hours, and collective bargaining rights. As a federation, member unions retained autonomy in their own areas of interest. However, since unions were organized by craft rather than by workplace, they had little interest in recruiting unskilled workers into membership. And, unlike the Knights, the AFL was openly hostile to women. Many unions affiliated with the AFL also had exclusionary policies when it came to immigrants and blacks. These prejudices were reinforced when blacks and immigrants worked as strikebreakers, who may have found the lure of employment too great to resist even as they faced militant strikers.
The business of agriculture changed in fundamental ways after the Civil War. Never had there been a greater expansion than between 1870 and 1900. Hundreds of thousands of acres of land came under cultivation west of the Mississippi, mostly in grain, causing other areas of the country to switch to and specialize in other farm products. The extensive network of rail lines facilitated moving products to markets, although not without significant cost. Mechanized farm equipment improved efficiency. Scientific methods were introduced through agencies like the Department of Agriculture, founded in 1862, and state experiment stations, established by the Hatch Act in 1887. Paradoxically, this expansion was accompanied by a steady price decline beginning with the Depression of 1873. This decline was fueled in part by overproduction and competition from other countries, thrusting American farmers into a global economy.
Farmers faced a number of economic issues between 1870 and 1897. The first was a steady downward trend in the price received for output coupled with chronic overproduction. Price declines were, in part, due to falling costs of production as new areas around the world came into production and as more progressive farmers continued the trend toward mechanization that began before the Civil War. And, in order to compensate for falling prices, farmers increased production as much as possible, hoping to make up in volume what was lost in falling prices.
Although prices declined in nearly all areas of the economy during the same period, the farmer faced other obstacles that reinforced the belief that other classes, especially the rising industrial and financial classes, were receiving a better deal from political institutions than they were. This feeling was more widespread in the newly opened areas of the plains, where the difficulties of initial cultivation and problems of isolated farm life fueled discontent. During the post-Civil War era, farm productivity did not grow as much as non-farm productivity. Therefore, when relative prices were approximately unchanged, the average farmer’s income grew at a slower rate than that of the average non-farmer. Farmers in the older and more favorably situated areas did, however, provide a counterbalance to the agrarian radicalism of the west and south.
Farmers in general also faced a shortage of both short- and long-term credit. Because nationally chartered banks had been forbidden to engage in farm mortgage financing, farmers had to resort to state banks and private mortgage bankers who lent only on short terms (five to seven years) at high rates of interest (8, 10 or 12 percent before 1887, 18 to 24 percent afterwards, in some cases), on mortgages that always seemed to come due when crops were bringing less than ever. Farmers borrowed money for start-up costs, to buy more land and to purchase livestock and equipment.
Railroad rates were another concern. Railroad companies, faced with stiff competition on trunk lines, often raised rates on the feeder lines where competition was less - or non-existent - to compensate. Much of the concern over rates came from the newly opened areas on the Great Plains. A survey of rail rates in 1886 reveals that it cost between 54 cents and 76 cents per ton mile to ship goods east of Chicago, but as much as $2.04 per ton mile west of the Missouri River. At one point, it was cheaper for farmers on the plains to burn corn for heat than to ship it east to market.
Finally, unlike other producers, farmers were not protected by tariff legislation. Their products competed in the world market without the protection of import tariffs. (See: Politics of the 1870s and 1880s) And, in 1879, Italy declared U.S. pork "unwholesome," and banned its import. Portugal, Spain, France, Germany and Austria-Hungary soon followed. Exports of wheat, rye and their flours suffered even more after 1880. Meanwhile, manufactured goods were protected by U.S. tariff laws, thus keeping the price paid for those goods artificially high. Consequently, farmers were forced to sell their goods in a purely competitive world market, while buying in a protected one.
During this period, the percentage of Americans who worked in farming was declining, even though the total number of persons engaged in farming was increasing. In 1870, 6,850,000 persons were farming. In 1880, that number was 8,585,000 and in 1900, 10,912,000. Corresponding U.S. population for those years was (1870) 39,905,000, (1880) 50,262,000, (1900) 76,094,000. By the 1880s, the center of outward population migration was the Old Northwest, which had a loss of 1,087,000 out of 1,363,000 nationwide. Nearly all the moving population came from Midwestern farms and settled in mining camps and towns rather than on the land.
Statistics also reveal that the gross product per farm worker increased within the same period. Calculated in 1910-1914 dollars, the value increased from $362 in 1870, to $439 in 1880 and $526 in 1900.
Once the wheat belt moved to the central plains, the Old Northwest became the corn and hog belt, although wheat continued to be a major crop. Of the twelve highest producing states for wheat in 1880, Indiana was second. Also in 1880, Indianapolis was second only to Chicago in the number of hogs handled at packing houses. Around the same time, 4/5 of the corn produced in the U.S. came from ten states; Indiana was fourth on the list.
It is difficult to categorize the economic state of farmers in general during this period. The consensus of historians seems to indicate that the more radical strain of farmers came from the South and newly opened Great Plains. Farmers in both these regions faced enormous start-up costs, the South recovering from the war and Reconstruction and the Plains just opening up to cultivation. In other areas of the country, falling incomes and return on investment coupled with the lure of the west and new opportunities in urban areas drew labor from the farm to these new opportunities. It took until the early twentieth century for supply and demand to level out, as farm population stabilized and new waves of immigration increased the food needs of the country. It is evident, however, that the last quarter of the nineteenth century was a major transition period for agriculture, just as it was for other sectors of the economy.
Some general statistics for Indiana, taken from the 1880 U.S. Census:
329,614 males were employed in farming; 118,221 as laborers, 209,297 as farmers and planters. Of these, 186,894 were between the ages of 16 and 59, and 22,403 were over 60 years old. Of the total, 186,894 were native born. The next largest group was German born, totaling 13,462. One thousand six hundred twenty-six women were employed in agriculture out of 51,422 women employed in all sectors statewide. Of those employed in agriculture, 526 were laborers and 982 were farmers and planters.
The number of farms in Indiana in 1870 was 161,289; that number was 194,013 in 1880, a 20% increase. Slightly over 76% of the farms were owner operated. Acres in farming in 1880 totaled 20,420,983 or 88.9% of the total land, second only to Ohio, in a state that had an average of 55 people per square mile.
In 1880, Indiana produced 115,482,300 bushels of corn, 47,284,853 bushels of wheat and 15,599,518 bushels of oats. The comparable figures for the entire U.S. were 1,754,591,675 in corn, 459,483,137 in wheat and 407,858,999 in oats. The export prices per bushel in 1870 were 92.5 cents for corn, $1.298 for wheat and 63 cents for oats. Nine years later those prices were 47.1 cents for corn, $1.068 for wheat and 29.7 cents for oats, reflecting a relatively steep decline.
Mass Production and the Consumer Economy
A number of factors worked to bring rural residents into a world of consumer culture that emerged after the Civil War. As railroads spread, so did the availability of goods now mass produced in urban factories. Refrigerated rail cars, first patented in 1868, now brought heretofore unavailable produce, like oranges, to remote corners of the country. Purchases that once occurred by bargaining with the local storekeeper were now transacted with distant purveyors through mail order catalogs. Department stores, mail order catalogs and the new 5 and 10 cent store were places of 'awakened desire,' as one historian put it. While the large retail emporiums and the 5 and 10 cent stores quenched the desires of city residents, rural families relied on the Montgomery Ward and Sears catalogs to keep them abreast of the latest conveniences, machinery, and fashion styles.
The Montgomery Ward catalog first arrived on the scene in 1872. That year, from a small rented loft in Chicago, Aaron Montgomery Ward sent out a single price sheet listing items for sale and explaining how to place an order. Twelve years later, the catalog numbered 240 pages and listed nearly ten thousand items for sale. Unlike a face-to-face purchase from the local storekeeper, the mail order business depended on the confidence of a buyer in a seller he or she had never seen. Ward built his business on his hope for a revolution in farmers' buying habits. At first, Ward's had the advantage of being the official supply house of the Grange. From 1872 through the 1880s, Ward's described itself as "The Original Grange Supply House" and offered Grangers special privileges. Ward's products also carried an ironclad guarantee - all goods were sent "subject to examination," and any item found to be unsatisfactory could be returned to the company, which paid the postage both ways. Ward's apparently succeeded in personalizing these otherwise remote transactions, for correspondence soon included hundreds of men writing annually seeking a wife and letters from a few women looking for husbands.
Sears, Roebuck and Company appeared on the scene a bit later. In 1886, Richard Warren Sears set up the R.W. Sears Watch Company in Minneapolis, leaving his railroad station agent's job after making about $5,000 by selling watches "on the side." A year later, he moved to Chicago and took a partner, Alvah Curtis Roebuck, a watchmaker. He sold this watch business in 1889 for $70,000, but was back in the retail business in a couple of years. By 1893, the firm of Sears, Roebuck and Company had expanded into a wide range of merchandise.
Some of the factors that precipitated the availability of mass goods arose from the Civil War. Standardized clothing sizes, developed first to clothe soldiers, transferred to the civilian population after the war. By the 1880s, retailers advertised "every size clothes for every sized man." Standard sizes for women took a bit longer to develop and become popular. The standardization and mass production of shoes followed a similar path.
Barter was still sometimes used as a means of transaction in rural areas. But, increasingly, the emergence of mail order catalogs and urban department stores, both with fixed prices, changed the way in which people acquired and paid for goods. The wide availability of relatively affordable consumer goods also changed how individuals and families defined "necessities" as advertising, a new medium, enticed folks to indulge in a variety of ‘new and improved’ commodities. The fixed price policy democratized the marketplace, in which items were judged not by their quality or function but by their price.
General storekeepers and other small merchants protested against the large retailers and their fixed prices. They cited unfair competition and the impersonal nature of ‘trading’ with the far-away retailers. In rural towns across the country, local residents still relied on the barter system and a bit of negotiating, but, increasingly, consumerism became a national phenomenon, encouraged by an expanding transportation system, relentless advertising and the growing availability of mass produced products.
The expansion and contraction of the money supply was of great concern to the farmers in the late 19th century. Prices were falling and interest rates rising, trapping farmers (and others) between the two. Some historians note, however, that even though farm prices were falling, so were other prices, so the total economic picture may not have been as bad as some say. Even so, farmers felt the pinch, especially in the newly opened areas west of the Mississippi, where start-up costs drove many into high interest, short term debt. Farmers in other regions faced similar challenges to raise production or increase specialization, and often took on debt to purchase new mechanized farm equipment or additional land. Finally, with the tight money supply, it was never certain whether hard currency would be available when it came time to sell crops after harvest.
One of the biggest issues surrounding the increase in the money supply was that of silver currency. In 1873, the government had dropped the provision for minting silver dollars in legislation governing the mint. This action attracted little attention at the time, since greenbacks and national bank notes were the only forms of currency in circulation. But as the money supply tightened, there was agitation to re-mint silver dollars. The government responded with the Bland-Allison Act of 1878, providing for the purchase of silver by the treasury in a specified amount and for its coinage into silver dollars. Provision was also made for the issuance of silver certificates in denominations of $10 and up. (In 1877, the Department of the Treasury's Bureau of Engraving and Printing started printing all U.S. currency.) Experience proved that it was impossible to keep silver dollars in circulation and, by 1886, it became necessary to reduce the denomination of silver certificates to one dollar. It was in this form that most of the silver purchased went into circulation. Consequently, money in circulation around 1886 consisted of greenbacks, national bank notes and silver certificates - with an occasional silver dollar turning up.
On late nineteenth century economy in general, including discussions of agriculture see:
Barnes, James A. Wealth of the American People. New York: Prentice Hall, 1949.
Bogart, Ernest L. The Economic History of the United States. New York: Longmans, Green and Company, 1917.
Bruchey, Stuart. Enterprise: The Dynamic Economy of a Free People. Cambridge, Massachusetts: Harvard University Press, 1990.
Chandler, Arthur. The Changing Economic Order: Readings in American Business and Economic History. New York: Harcourt, Brace and World, Inc. 1968.
Degler, Carl N. The Age of Economic Revolution, 1876-1900. Glenview, Illinois: Scott, Foresman and Company, 1977.
Garraty, John A. The American Nation: A History of the United States Since 1865. New York: Harper and Row, 1983.
Greenleaf, William. American Economic Development Since 1860. Columbia: University of South Carolina Press, 1968.
Gunderson, Gerald. A New Economic History of America. New York: McGraw Hill, 1976.
Higgs, Robert. The Transformation of the American Economy, 1865-1914: An Essay in Interpretation. New York: John Wiley & Sons, 1971.
Hofstadter, Richard and Beatrice Hofstadter. Great Issues in American History: From Reconstruction to the Present Day, 1864-1981. New York: Vintage Books, 1982. (See Part III Agrarian Reform, No. 1 Resolution of the Meeting of the Illinois State Farmer’s Association, April 1873.)
Licht, Walter. Industrializing America: the Nineteenth Century. Baltimore: Johns Hopkins Press, 1995.
Martin, Albro. "Economy from Reconstruction to 1914." in Porter, Glenn, ed. Encyclopedia of American Economic History. New York: Charles Scribner Sons, 1980.
Shannon, Fred. The Centennial Years: A Political and Economic History of America from the Late 1870s to the Early 1890s. Garden City, New York: Doubleday & Company, 1967.
Shields, Roger Elwood. Economic Growth with Price Deflation, 1873-1896. Dissertations in American Economic History, University of Virginia, August 1969.
For discussions of consumerism, see:
Boorstin, Daniel. The Americans: The Democratic Experience. New York: Random House, 1973.
Schlereth, Thomas J. Victorian America: Transformations in Everyday Life, 1876-1915. New York: Harper-Collins, 1991.
Dirks, Scott. The Value of A Dollar: Prices and Incomes in the United States, 1860-1989. Lakeville, Connecticut: Grey Publishing House, 1999.
On the roots of the silver issue, see:
Weinstein, Allen. Prelude to Populism: Origins of the Silver Issue, 1867-1878. New Haven: Yale University Press, 1970.
For a focus on farmers and agriculture, see:
Danhof, Clarence H. "Agriculture in the North and West." in Porter, Glenn, ed. Encyclopedia of American Economic History. New York: Charles Scribner Sons, 1980.
Shannon, Fred A. The Farmer’s Last Frontier: Agriculture 1860-1897. Farrar & Rinehart, Inc., 1945. | http://www.connerprairie.org/Learn-And-Do/Indiana-History/America-1860-1900/1880s-Economy.aspx | 13 |
14 | February 16, 2012
Grade 11 Food Studies
Here are the links you need in order to complete the worksheet that was given in
class on February 10, 2012:
**** Ms. Bergen’s students in the Grade 11 Food Studies section
need to hand in this sheet completed for marks.
Grade 10 Food Studies
Some helpful links for this assignment:
We are busy, busy people. Compared to 50 years ago, life can be very fast paced. The food industry is keeping up! Fast food restaurants are everywhere, making fast food a convenient option for breakfast, lunch, dinner and even for a snack.
Think about your eating habits, how good are they?
This is what you are going to do…
1. Complete the top two sections of the worksheet labeled, “You Are What You Eat.”
Read this below and complete the next section of your assignment:
Nutrition and the Health of Young People
Benefits of Healthy Eating
Proper nutrition promotes the optimal growth and development of children.1
- Healthy eating helps prevent high cholesterol and high blood pressure and helps reduce the risk of developing chronic diseases such as cardiovascular disease, cancer, and diabetes.1
- Healthy eating helps reduce one’s risk for developing obesity, osteoporosis, iron deficiency, and dental caries (cavities).1,2
Consequences of a Poor Diet
- A poor diet can lead to energy imbalance (e.g., eating more calories than one expends through physical activity) and can increase one’s risk for overweight and obesity.1,8
- A poor diet can increase the risk for lung, esophageal, stomach, colorectal, and prostate cancers.9
- Individuals who eat fast food one or more times per week are at increased risk for weight gain, overweight, and obesity.1
- Drinking sugar-sweetened beverages can result in weight gain, overweight, and obesity.1
- Providing access to drinking water gives students a healthy alternative to sugar-sweetened beverages.
- Hunger and food insecurity (i.e., reduced food intake and disrupted eating patterns because a household lacks money and other resources for food) might increase the risk for lower dietary quality and undernutrition. In turn, undernutrition can negatively affect overall health, cognitive development, and school performance.10-12
Eating Behaviors of Young People
- Most U.S. youth
- Do not meet the recommendations for eating 2½ cups to 6½ cups* of fruits and vegetables each day
- Do not eat the minimum recommended amounts of whole grains (2–3 ounces* each day)
- Eat more than the recommended maximum daily intake of sodium (1,500–2,300 mg* each day) .1,3,7
- Empty calories from added sugars and solid fats contribute to 40% of daily calories for children and adolescents aged 2–18 years, affecting the overall quality of their diets. Approximately half of these empty calories come from six sources: soda, fruit drinks, dairy desserts, grain desserts, pizza, and whole milk.5
- Adolescents drink more full-calorie soda per day than milk. Males aged 12–19 years drink an average of 22 ounces of full-calorie soda per day, more than twice their intake of fluid milk (10 ounces), and females drink an average of 14 ounces of full-calorie soda and only 6 ounces of fluid milk.6
Diet and Academic Performance
- Eating a healthy breakfast is associated with improved cognitive function (especially memory), reduced absenteeism, and improved mood.13-1
information courtesy of: http://www.cdc.gov/healthyyouth/nutrition/facts.htm
-use http://www.hc-sc.gc.ca/fn-an/food-guide-aliment/basics-base/quantit-eng.php to answer the questions.
Use the 2 links below to find 3 fast food restaurants that you enjoy. For each restaurant you will compose 3 different meals. Write the total calories next to each food you choose and then add up the whole meal at the end.
(scroll to bottom): http://www.chowbaby.com/fastfood/fast_food_calories.asp
Meal 1 ~ Chicken Club Wrap (680), Medium Curly Fries (210), Small Iced Tea (120). Total Calories: 1010
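If you prefer to check your totals with a short script instead of adding by hand, here is a minimal sketch of the same arithmetic in Python. The item names and calorie counts are taken from the sample Meal 1 above; any other values would come from the fast-food nutrition links, and the variable names are purely illustrative.

```python
# Minimal sketch: total the calories for one fast-food meal.
# Item names and calorie counts mirror the sample Meal 1 above;
# substitute values from the restaurant nutrition pages you used.
meal_1 = {
    "Chicken Club Wrap": 680,
    "Medium Curly Fries": 210,
    "Small Iced Tea": 120,
}

total_calories = sum(meal_1.values())
print(f"Meal 1 total calories: {total_calories}")  # prints 1010
```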
5. In Microsoft Word, begin to type your conclusion. Think about these questions as you write:
1. Was the information you found informative?
2. Do you have healthy eating habits?
3. Was there any information that surprised you?
4. Will you be more conscious of what you eat at fast food restaurants?
5. Will you change the way you eat based on the information you found?
Please print out your conclusion. Remember to put your name on it!
- 1/2 pound dry fettuccine pasta
- 3-4 Tbsp butter
- 2/3 cup finely grated parmesan cheese
- Black pepper
- 1/2 cup cream
1 Bring a large pot of salty water to a boil and drop in your fettuccine.
2 Melt the butter in a large sauté pan set over low heat. Add the cream to the butter as it melts. Stir often to combine the two, do not turn off the heat, but keep the heat at its lowest setting while the pasta cooks.
3 When the fettuccine is al dente (cooked, but still a little firm) lift it out of the pot with tongs and move the pasta to the sauté pan. Do not drain the pasta. You want it dripping wet with the cooking water. Turn on the heat under the sauté pan to medium and swirl the pasta and butter together to combine. Add half the cheese, then swirl and toss the pasta until it has incorporated into the sauce. If needed, add a few spoonfuls more of the pasta cooking water. Add the rest of the cheese and repeat.
4 Serve at once with either a little black pepper (for classic version) or nutmeg (for creamy version) ground over the pasta.
Yield: Serves 4.
- 4 potatoes, (unpeeled, washed)
- about 1/4 cup onion
- 1/2 tsp. salt
- about 2 tbsp. flour
- butter or oil
- Cut up potatoes (about 4 cups) and put in blender.
- Add onion, salt, and flour.
- Blend, stopping frequently to scrape down sides.
- Heat enough butter, oil, or mixture of butter and oil to coat bottom of fry pan. Keep on high heat.
- Drop spoonfuls of batter into pan, pressing down lightly with back of spoon to flatten pancake.
- Fry until brown (about 3-4 minutes), flip over, and continue frying until cooked through and crispy brown (about 3-4 minutes).
- Remove and keep warm in oven. Continue frying till all batter is used, adding more butter (or oil) as needed.
- Serve with applesauce, sprinkle with sugar, or serve plain.
For today, please take notes on the topic of Eating Personalities:
You get 25 marks for copying the notes. After you have written the notes, answer the following questions (6 marks):
1. What “Eating Personality” are you?
2. Do you constantly think about food or do you sometimes forget to eat?
3. Do you know anyone with an eating disorder (anorexia or bulimia)?
Worth: 31 marks, due at the end of class today.
To be or not to be?
Worth: 37 marks
Due: November 4th, end of class.
What is a “Vegetarian Diet”?
The types of vegetarian diets are many and varied. The common labels are:
- lactovegetarian – includes dairy products
- ovovegetarian – includes eggs
- pescovegetarian – includes seafood
- vegan – excludes all animal products including honey
Why choose a vegetarian diet?
Health and Well-being
- Vegetarian diets are linked with lower incidences of many diseases including heart disease, cancer, diabetes, gallstones, kidney disease, gastrointestinal disease, and rheumatoid arthritis.
- The health benefits are associated with diets that are low in animal protein, saturated fat, cholesterol, and high in whole grains, fiber, fruits, vegetables, plant protein, phytochemicals, and antioxidants.
Animal Welfare and Non-violence
- People interested in protecting the welfare of animals often choose vegetarian diets. In addition to a vegetarian or vegan diet, they may also choose not to wear leather, silk, or wool or use other products that utilize animal ingredients or testing on animals.
Environment
- Concern for the environment includes concern about the use of land for grazing and raising livestock, consumption of water, disposal of animal waste and animal carcass by-products, pollution, and preserving natural ecosystems.
Religion
- Some religions, including Seventh-day Adventism, Buddhism, and Hinduism, advocate not eating meat.
Choose a topic for your proposed legislation. Once a topic is chosen, develop the proposed legislation and a rationale. Complete the requirements listed below in the “Process” section.
Sample topics for proposed legislation:
- Require inclusion of soy protein in school lunch programs
- Modify the Canada Food Guide and Dietary Recommendations for Canadians to promote vegetarianism
- Reduce or eliminate federal aid to the meat and poultry industry, increase aid to growers of plant foods (such as soybean farmers)
Sample ideas for rationale:
- Promote better health for Canadians
- Protect the environment
- Protect animal rights
Sample proposed legislation with rationale:
We propose that congress require the inclusion of soy-based meat alternatives in the school lunch program at least twice a week in order to teach children about healthy alternatives to meat and promote better health among children.
1. Visit at least 6 of the websites listed below. List the 6 websites and for each website you visited write a brief summary (at least 3-4 sentences in your own words- not pasted from a website) describing the information you found on the site that is helpful to this project. (6 websites x 4 marks = 24 marks)
2. Did you visit any websites not listed on this webquest? If so, list the name of the website(s) and url(s) of the sites you thought were interesting and helpful to this project.
3. Based on the information you explored, state your proposed legislation and rationale. List at least 5 points that you intend to use to support your proposed legislation and rationale.
4. List at least 2 sources of opposition you think you might encounter and why. How will you refute this opposition? (8 marks)
1. The Position of the American Dietetic Association: Vegetarian Diets
The position of the ADA provides additional information and guidelines for planning a vegetarian diet as well as the association’s “official” opinion on the matter. The ADA has developed a food guide pyramid for vegetarian diets.
2. Vegetarian Nutrition Dietetic Practice Group
This website contains some very useful information including articles about vegetarianism and links to other resources. Most information is available to everyone, regardless of whether you are a member of the practice group.
3. Making the Change to a Vegetarian Diet
This is a fact sheet published by the Vegetarian Nutrition Dietetic Practice Group which serves as a brief list of suggestions to assist in making the change to a meatless diet.
4. School Lunch Program Requirements
* Scroll down to “Commodity Foods”
Close to 8 million breakfasts and 27 million lunches are provided each day by the U.S. Department of Agriculture’s (USDA’s) National School Lunch Program (NSLP) and School Breakfast Program. For many years the USDA maintained specifications for textured vegetable protein products allowing their use in school lunches to replace up to 30% of meat in various menu items. Soy milk is required to be available only for children with documented medical conditions restricting them from drinking cow’s milk. Why not make it available for everyone, even if students paid more for it? Without expounding on the commercialism and politics behind this decision, I do want to comment that I think it is unfortunate that more soy products are not permitted to be used in school lunches. Soy is a very healthful food and it would serve children well to become accustomed to eating healthfully.
5. Vegetarian Resource Group
The Vegetarian Resource Group (VRG) is a non-profit organization dedicated to educating the public on vegetarianism and the interrelated issues of health, nutrition, ecology, ethics, and world hunger. Their site specifically includes lots of useful information for kids and teens who are vegetarian including information about scholarships and essay contests for teen vegetarians.
6. Vegan Outreach- Vegan Starter Pack
For those considering a vegan diet, this starter pack can be very useful. The focus is on veganism, and therefore it discusses many aspects of being vegan, not only diet. It provides good food for thought!
7. Vegetarian Network Victoria
The Vegetarian Network Victoria website includes a variety of interesting information about reasons to become vegetarian and helpful suggestions for doing so.
8. Go Veg.com
This website contains information about all the reasons for becoming vegetarian but especially focuses on animal welfare. These links contain information about the environment, animal welfare, factory farming, world hunger, and health issues.
9. EarthSave International
Aside from concern for your own health or the well-being of animals, another important reason to consider adopting a vegan diet is to protect our planet. The use of water, land, and animal feed to produce meat and dairy products is taking a huge toll on our natural resources. So much so that we risk eventually running out of water in the western U.S. The devastation of rain forest to raise beef for fast food restaurants has been widely publicized for years but yet doesn’t seem to hamper the appetite of our culture for a fast food burger. This is an area where environmental scientists and nutritionists would do well to work together and promote healthy diets to promote a healthy planet. This is the only home we have and we need to take care to ensure our own sustainability on the planet.
The Union of Concerned Scientists says there are two things people can do to most help the environment. The first is to drive a fuel-efficient automobile (that means, not an SUV or a truck) and live near where we work. The second is to not eat beef. Not only does raising beef use 2,500-5,000 gallons of water per pound of beef, it also creates huge amounts of waste products that must be dealt with, as well as damaging the grazing areas of the southwest, which are slowly being turned into desert by the hooves of cattle.
10. Soy For You Webquest
This is a webquest specifically about the benefits of eating soy foods.
11. People for the Ethical Treatment of Animals
PETA is an activist group which is quite controversial given some of their actions. Nonetheless, the website contains useful information about the treatment of animals.
12. The Environmental Impacts of Factory Farming in Michigan
This is a documentary about the impacts of industrial farming on ecosystems, our air, soil, water, human health, and animal wellbeing. Provided by The Michigan Chapter of The Sierra Club, this video focuses on the impacts in Michigan.
13. Meet Your Meat
This is a short video narrated by Alec Baldwin that discloses industry standard practices used to raise animals for meat. Warning: This is graphic so if you might be overwhelmed by footage of animals suffering, you should not watch this.
14. The Food Revolution
This book by John Robbins (son of the founder of the famed Baskin-Robbins) shares his view about how Americans “can enhance their health, express compassion, and help create a thriving just and sustainable planet.”
15. Dietary Choices that Impact the Environment
This powerpoint presentation is based on a paper I wrote for an Earth Science class. It contains current information about the environmental impact of raising meat and a list of additional resources.
Just for Fun – List of Vegetarian Celebrities
This website may not be counted as one of your 6 sites to visit!!!!
For today’s class, please refer to your Fats Notetaker:
1. Please calculate the % of calories from fat for the 6 foods in the chart at the bottom of the page (a short formula sketch follows this list).
2. Please research 10 ways to reduce fat in the foods you make (#11).
3. Please change the recipe (#12) in a way that lowers the fat, and make the finished product.
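A note on question 1 above: the notetaker chart itself isn't reproduced here, but the standard way to compute the percentage of calories from fat is to multiply grams of fat by 9 (fat supplies roughly 9 calories per gram), divide by the food's total calories, and multiply by 100. The Python sketch below shows that formula; the example food and its numbers are made-up placeholders, not values from the chart.

```python
# Minimal sketch of the standard "% of calories from fat" calculation.
# Fat supplies roughly 9 calories per gram.
CALORIES_PER_GRAM_OF_FAT = 9

def percent_calories_from_fat(fat_grams, total_calories):
    """Return the share of a food's calories that comes from fat, as a percent."""
    fat_calories = fat_grams * CALORIES_PER_GRAM_OF_FAT
    return fat_calories / total_calories * 100

# Hypothetical example: a food with 10 g of fat and 250 total calories.
print(round(percent_calories_from_fat(10, 250), 1))  # -> 36.0
```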
When the above is complete, please watch this video (does not need sound, but it does
have a musical background):
At the bottom of your Fats Notetaker, please write a 5+ sentence response to the video.
Remember to hand in your Fats Notetaker to Ms. Barker at the end of class for marks.
Have a great class! | http://blogs.wsd1.org/gordonbell-bizteched/ | 13 |
Central banks have emerged over the past four centuries as mankind moved from a system of gold- or silver-backed currencies, to privately issued notes, to fiat money. The first central bank in the world was the Swedish Riksbank, founded in 1668.
Scottish businessman William Paterson founded the Bank of England in 1694 on request of the British government to finance a war.
The First Bank of the USA was founded in 1791 and had a 20-year charter. The idea was, however, revived in 1816, giving birth to the Second Bank of the United States. This desperate move by US president James Madison to stabilize the currency was later undone by US president Andrew Jackson, who withdrew the bank's federal charter in 1836. In 1841 the Second Bank of the United States ceased all operations.
The history of central banking came alive again in 1913 with the constitutionally disputed foundation of the Federal Reserve.
As nearly all currencies in the world have transformed into fiat money over the past 4 centuries, i.e., notes mandated to be used by government fiat, all countries have some sort of central bank that is responsible for keeping inflation low and providing a money supply that does not overshoot economic growth too much. Other tasks vary widely from country to country.
Central banks primarily purchase short-term debt issued by the governments of the countries in which they serve. Sometimes they have been used as a political piggy bank, with the central banks buying riskier investments, causing huge losses and inflation if the bank's government doesn't bail them out, since the bank has no assets to sell to counter inflation, leaving too many banknotes in circulation. | http://www.wikinvest.com/wiki/Central_bank | 13
29 |
When the first Europeans arrived in the late fifteenth century, many inhabitants of the Gold Coast area were striving to consolidate their newly acquired territories and to settle into a secure and permanent environment. Several immigrant groups had yet to establish firm ascendancy over earlier occupants of their territories, and considerable displacement and secondary migrations were in progress. Ivor Wilks, a leading historian of Ghana, observed that Akan purchases of slaves from Portuguese traders operating from the Congo region augmented the labor needed for the state formation that was characteristic of this period. Unlike the Akan groups of the interior, the major coastal groups, such as the Fante, Ewe, and Ga, were for the most part settled in their homelands.
The Portuguese were the first to arrive. By 1471, under the patronage of Prince Henry the Navigator, they had reached the area that was to become known as the Gold Coast because Europeans knew the area as the source of gold that reached Muslim North Africa by way of trade routes across the Sahara. The initial Portuguese interest in trading for gold, ivory, and pepper so increased that in 1482 the Portuguese built their first permanent trading post on the western coast of present-day Ghana. This fortress, Elmina Castle, constructed to protect Portuguese trade from European competitors and hostile Africans, still stands.
With the opening of European plantations in the New World during the 1500s, which suddenly expanded the demand for slaves in the Americas, trade in slaves soon overshadowed gold as the principal export of the area. Indeed, the west coast of Africa became the principal source of slaves for the New World. The seemingly insatiable market and the substantial profits to be gained from the slave trade attracted adventurers from all over Europe. Much of the conflict that arose among European groups on the coast and among competing African kingdoms was the result of rivalry for control of this trade.
The Portuguese position on the Gold Coast remained secure for almost a century. During that time, Lisbon leased the right to establish trading posts to individuals or companies that sought to align themselves with the local chiefs and to exchange trade goods both for rights to conduct commerce and for slaves whom the chiefs could provide. During the seventeenth and eighteenth centuries, adventurers--first Dutch, and later English, Danish, and Swedish-- were granted licenses by their governments to trade overseas. On the Gold Coast, these European competitors built fortified trading stations and challenged the Portuguese. Sometimes they were also drawn into conflicts with local inhabitants as Europeans developed commercial alliances with local chiefs.
The principal early struggle was between the Dutch and the Portuguese. With the loss of Elmina in 1642 to the Dutch, the Portuguese left the Gold Coast permanently. The next 150 years saw kaleidoscopic change and uncertainty, marked by local conflicts and diplomatic maneuvers, during which various European powers struggled to establish or to maintain a position of dominance in the profitable trade of the Gold Coast littoral. Forts were built, abandoned, attacked, captured, sold, and exchanged, and many sites were selected at one time or another for fortified positions by contending European nations.
Both the Dutch and the British formed companies to advance their African ventures and to protect their coastal establishments. The Dutch West India Company operated throughout most of the eighteenth century. The British African Company of Merchants, founded in 1750, was the successor to several earlier organizations of this type. These enterprises built and manned new installations as the companies pursued their trading activities and defended their respective jurisdictions with varying degrees of government backing. There were short-lived ventures by the Swedes and the Prussians. The Danes remained until 1850, when they withdrew from the Gold Coast. The British gained possession of all Dutch coastal forts by the last quarter of the nineteenth century, thus making them the dominant European power on the Gold Coast.
During the heyday of early European competition, slavery was an accepted social institution, and the slave trade overshadowed all other commercial activities on the West African coast. To be sure, slavery and slave trading were already firmly entrenched in many African societies before their contact with Europe. In most situations, men as well as women captured in local warfare became slaves. In general, however, slaves in African communities were often treated as junior members of the society with specific rights, and many were ultimately absorbed into their masters' families as full members. Given traditional methods of agricultural production in Africa, slavery in Africa was quite different from that which existed in the commercial plantation environments of the New World.
Another aspect of the impact of the trans-Atlantic slave trade on Africa concerns the role of African chiefs, Muslim traders, and merchant princes in the trade. Although there is no doubt that local rulers in West Africa engaged in slaving and received certain advantages from it, some scholars have challenged the premise that traditional chiefs in the vicinity of the Gold Coast engaged in wars of expansion for the sole purpose of acquiring slaves for the export market. In the case of Asante, for example, rulers of that kingdom are known to have supplied slaves to both Muslim traders in the north and to Europeans on the coast. Even so, the Asante waged war for purposes other than simply to secure slaves. They also fought to pacify territories that in theory were under Asante control, to exact tribute payments from subordinate kingdoms, and to secure access to trade routes--particularly those that connected the interior with the coast.
It is important to mention, however, that the supply of slaves to the Gold Coast was entirely in African hands. Although powerful traditional chiefs, such as the rulers of Asante, Fante, and Ahanta, were known to have engaged in the slave trade, individual African merchants such as John Kabes, John Konny, Thomas Ewusi, and a broker known only as Noi commanded large bands of armed men, many of them slaves, and engaged in various forms of commercial activities with the Europeans on the coast.
The volume of the slave trade in West Africa grew rapidly from its inception around 1500 to its peak in the eighteenth century. Philip Curtin, a leading authority on the African slave trade, estimates that roughly 6.3 million slaves were shipped from West Africa to North America and South America, about 4.5 million of that number between 1701 and 1810. Perhaps 5,000 a year were shipped from the Gold Coast alone. The demographic impact of the slave trade on West Africa was probably substantially greater than the number actually enslaved because a significant number of Africans perished during slaving raids or while in captivity awaiting transshipment. All nations with an interest in West Africa participated in the slave trade. Relations between the Europeans and the local populations were often strained, and distrust led to frequent clashes. Disease caused high losses among the Europeans engaged in the slave trade, but the profits realized from the trade continued to attract them.
The growth of anti-slavery sentiment among Europeans made slow progress against vested African and European interests that were reaping profits from the traffic. Although individual clergymen condemned the slave trade as early as the seventeenth century, major Christian denominations did little to further early efforts at abolition. The Quakers, however, publicly declared themselves against slavery as early as 1727. Later in the century, the Danes stopped trading in slaves; Sweden and the Netherlands soon followed.
The importation of slaves into the United States was outlawed in 1807. In the same year, Britain used its naval power and its diplomatic muscle to outlaw trade in slaves by its citizens and to begin a campaign to stop the international trade in slaves. These efforts, however, were not successful until the 1860s because of the continued demand for plantation labor in the New World.
Because it took decades to end the trade in slaves, some historians doubt that the humanitarian impulse inspired the abolitionist movement. According to historian Walter Rodney, for example, Europe abolished the trans-Atlantic slave trade only because its profitability was undermined by the Industrial Revolution. Rodney argues that mass unemployment caused by the new industrial machinery, the need for new raw materials, and European competition for markets for finished goods are the real factors that brought an end to the trade in human cargo and the beginning of competition for colonial territories in Africa. Other scholars, however, disagree with Rodney, arguing that humanitarian concerns as well as social and economic factors were instrumental in ending the African slave trade.
Source: U.S. Library of Congress, http://countrystudies.us/ghana/6.htm
The impact of colonialism
From voyages of trade and discovery to colonisation:
This section of the grade 10 curriculum was developed in 2009. While much of the content is still relevant to the new curriculum, the focus is slightly different. However, it provides useful further reading. In this section you will look at how the expansion of European trade led to the establishment of fortified trading stations and eventually permanent European settlements in the Americas, Africa and India.
Early European voyages of trade and discovery
The powerful Ottoman Empire blocked European access to markets in the East. The Ottoman Turks controlled trade routes to the East.
The main reason why Europeans began to search for a sea route to the East was to avoid paying expensive customs duties, or taxes. The rulers of every country between India and Europe charged a tax on the spice shipments as the goods passed through their land. Europeans used salt and spices such as nutmeg and cloves to preserve their meat, as they did not have refrigerators to keep meat fresh.
During the fifteenth century the Portuguese began to explore the west coast of Africa. They established trading stations and began trading in gold and slaves in competition with the inland trans-Saharan trade routes.
Diogo Cão reached the mouth of the Congo River in 1483 and Cape Cross in 1486. Bartholomeu Dias was the first European to sail around the southern tip of Africa. He reached Mossel Bay in 1488 and on his way back to Portugal saw the Cape peninsula for the first time and named it the 'Cape of Storms' because of the bad weather the ships experienced there.
King John II of Portugal was so pleased when he heard the news that he renamed it the 'Cape of Good Hope'. In late 1497, the Portuguese navigator Vasco da Gama sailed around the Cape of Good Hope using Dias's navigational charts. He stopped at several places along the east coast of Africa. At the port of Malindi, he found an experienced Arab navigator, Ibn Majid, who joined the expedition and showed him the sea route to India across the Indian Ocean. They arrived in Calicut in 1498.
While the Portuguese were looking for a route to India around Africa, the Spanish were looking for a western route. Sponsored by King Ferdinand and Queen Isabella of Spain, the Italian navigator, Christopher Columbus, sailed west across the Atlantic in an effort to reach Asia in 1492.
Columbus based his voyage on his calculation of the earth's size (which later turned out to be wrong). He reached the Caribbean islands off what would later be called North and South America. He was convinced he had found the East Indies. Columbus claimed San Salvador, Cuba and Hispaniola for the Spanish crown where he established trading stations to finance his voyages.
The attempts by Columbus and da Gama to find new trade routes to the East encouraged exploration in other areas. King Henry VII of England sponsored John Cabot's exploration of a north-western route to the East. In 1497 he discovered Newfoundland and Nova Scotia. At the same time Amerigo Vespucci claimed to have discovered a 'New World' in 1497 when he landed on the continent of South America. These discoveries resulted in European colonisation during the sixteenth and seventeenth centuries.
The role of companies
A number of companies were formed in Europe during the seventeenth and eighteenth centuries to further expand trade with the East. These were formed by merchant adventurers who travelled to the East after the discovery of the Cape sea route. These companies were given charters to trade by the governments of their countries. This meant that the rulers of Europe were not directly involved in trade. They did, however, support the companies and welcomed the increased wealth that trade brought to the economy.
A charter is a document giving authority to companies to take over and control other areas. This control included various functions of government such as making laws, issuing currency, negotiating treaties, waging war and administering justice. The most important of these chartered companies were:
- Danish East India Company;
- English East India Company;
- French East India Company.
Colonisation is the process of acquiring colonies. European powers took over land by force and then settled European people on the land. The conquered land then became known as a colony. Imperialism is a policy of extending a country's power and influence through colonisation, use of military force, or other means.
Over the past 500 years there have been different phases of colonisation.
In the early stages of colonisation, colonies were mainly trading stations. At first, Portugal and Holland were more interested in trade than settling people in colonies. They built forts along the coastline of Africa and Asia to protect their trade and did not try to control land in the interior. As the colonial trade became more competitive, trading stations grew into colonies of settlement.
Colonies of settlement
During the phase of colonial settlement, European countries sent settlers to inhabit and control large areas of land. They took complete control of new areas by force and imposed European laws. These settlers often excluded indigenous inhabitants from their society or killed many of them in violent wars or through disease. In the Americas, many Native Americans died from diseases that were brought to their land by Europeans. Examples of settlement colonies include English colonies in parts of the United States, Canada and Australia.
Colonies of exploitation
Colonies of exploitation did not attract large numbers of permanent European settlers. Small numbers of Europeans went to these colonies mainly to seek employment as planters, administrators, merchants or military officers. In exploitation colonies, the colonisers used force to crush resistance and maintain control. They did not displace or kill indigenous societies; instead they made use of their labour. Colonies of exploitation included Indonesia and Malaya in South-East Asia, and Nigeria and Ghana in West Africa.
Contested settlement colonies
In a contested settlement colony, a large number of Europeans permanently settled in the colony. In America, settlers started their own government and cut ties with their country of origin. In some cases the indigenous population not only resisted but increased in size and their labour remained the backbone of the economy, as was the case in South Africa. However, when the United States of America broke away from Britain, the indigenous population was virtually wiped out and slave labour had to be imported to do the work.
In informal empires, Europeans had influence over the rulers of the country without taking control of it. During the nineteenth century, individual Western nations called parts of China their sphere or area of influence. These Western nations even required that disagreements involving Europeans in these areas be judged according to Western laws in Western courts.
Reasons for colonisation
A quick way to remember the main reasons for establishing colonies is 'gold, God and glory', but you need to understand each reason in more detail.
Colonies were important sources of raw materials (such as raw cotton) and markets for manufactured goods (such as textiles). The colonising country could prevent competitors from trading with its colonies. This is known as a trade monopoly. The exploitation of mineral and other resources provided great wealth for the colonising country. Gold, in particular, was a highly sought-after commodity. Individual investors saw opportunities to make personal fortunes by helping to finance the establishment of colonies. Both slavery and colonisation provided cheap labour which increased profits and added to the wealth of the colonisers.
Europeans believed that it was their duty to spread Christianity among 'heathens' (non-believers) in other countries of the world. Both Roman Catholic and Protestant missionaries were sent to remote areas in order to convert people to Christianity. Missionaries also offered the indigenous people Western education and medical care, which they believed were better than those offered by traditional teachers and healers. They believed they were doing God's work and helping to 'civilise' the rest of the world. They were known as humanitarians because they were concerned about the welfare of their fellow human beings. Unfortunately, many greedy and ruthless people hid behind religion to disguise what they were actually doing - destroying whole cultures and civilisations so that they could have control over the people and their land.
Countries with large empires were respected and admired. Increased wealth resulted in greater military and political power. A small country like England became one of the most powerful empires in the world by taking over large areas of land and dominating international trade. Competition and rivalry among the colonial powers often resulted in war, as they tried to take over each other's colonies.
Certain colonies were acquired for their strategic importance. This means that they were well positioned in times of war. They also enabled the colonisers to control trade routes. The settlement at the Cape is a good example of a strategic reason for acquiring a colony. As long as the Dutch controlled the Cape, they controlled the sea route to the East. The Dutch built a fort on the Cape peninsula to defend the colony against attack from rival colonial powers.
Conquest, warfare and early colonialism in the Americas
The Caribbean Islands
On his first voyage, Columbus claimed San Salvador, Cuba and Hispaniola as Spanish possessions. He built a fort and left behind Spanish soldiers to hunt for gold on Hispaniola, while he returned to Spain. (These men were later killed by the island's inhabitants, whom they had mistreated.)
On his second voyage, Columbus took a thousand Spanish colonists to settle in Hispaniola. This was the first European colony in the 'New World'. These colonists fought among themselves and with the inhabitants of the island. They were greedy and complained that there was not enough gold to make them all rich. They were given land and allowed to force the indigenous people to work for them, but they were still not satisfied. The colonists were also responsible for introducing foreign epidemic diseases such as influenza, smallpox, measles and typhus, which drastically reduced the indigenous population in the Caribbean within 50 years.
The American mainland
In the early 1500s the Spanish began to conquer the mainland of Central and South America. Vasco Núñez de Balboa, a Spanish merchant, was considered the first of the conquistadors. Balboa is best known as the first European to see the Pacific Ocean. However, his expedition did not end well as one of his rivals, the newly appointed governor of Darien (Panama) had him executed. Today, Panama honours Balboa by naming its monetary unit, the balboa, after him.
Conquering the Aztec Empire
You learnt about the wealthy and powerful Aztec Empire in the previous section. The following case studies will tell you more about how this mighty empire was destroyed by the Spanish.
Conquest Case Study 1
In 1519 the Spanish conquistador, Hernán Cortés, led an expedition into central Mexico in search of land and gold. He arrived with five hundred men wearing armour. They brought with them cannons, mastiff dogs and sixteen horses. Cortés defeated the enemies of the Aztecs, the Tlaxcalans, and then formed an alliance with them in order to defeat the Aztecs. Thousands of Tlaxcalans who wanted to see the destruction of the Aztec Empire joined him as he rode to Tenochtitlán, the capital city.
The Aztec ruler, Emperor Montezuma II, greeted Cortés with gifts because he believed that he was the Aztec god, Quetzalcoatl, who had come from the sea. He allowed Cortés to enter the city in order to learn more about the Spaniards and their intentions. When the Spaniards saw large amounts of gold and other treasures, they captured the emperor and began to rule the empire. With the assistance of the Tlaxcalans, and after many bloody battles, the Spaniards eventually defeated the Aztecs in August 1521. The Spaniards conquered the remaining Aztecs and took over their lands, forcing them to work in gold mines and on Spanish estates.
The fall of Tenochtitlán marked the end of the Aztec civilisation, which had existed for centuries. The city was looted of all its treasures and then the buildings were blown up with barrels of gunpowder. On the ruins of Tenochtitlán, the Spaniards built Mexico City. The city's present-day cathedral rises over the ruins of an Aztec temple and the palace of the Mexican president stands on the site of the palace of Montezuma. The Spanish called their new colony in Mexico 'New Spain'.
Conquering the Inca Empire
Conquest Case Study 2
Francisco Pizarro was the Spanish conqueror of Peru. He left Spain for the West Indies in 1502 and lived on the island of Hispaniola. He was also part of Balboa's expedition to the Pacific Ocean. Pizarro heard tales of a southern land rich in gold. During the 1520s Pizarro led two expeditions down the west coast of South America and saw the golden ornaments worn by Native Americans of the Inca Empire of Peru. He got permission from the King of Spain, Charles V, to conquer this land and become its governor. Pizarro raised an army of 180 men to take with him to Peru. Atahualpa, the Inca, or emperor, was captured by the Spaniards, who held him hostage. His followers were tricked into paying a large ransom of silver and gold. Instead of sparing his life as promised, Pizarro executed Atahualpa on 29 August 1533 and took control of the town of Cajamarca.
Pizarro then marched south and captured the Inca capital at Cuzco. After looting Cuzco, the Spaniards went on to establish control over the rest of the land of the Incas. Without an emperor to lead them, the Incas found it hard to resist the Spanish invasion. They were divided among themselves and their weapons were no match for the guns of the Spaniards. Only one Inca community, which was high up in the mountains and difficult to reach, held out against the conquistadors. It survived as the last Inca stronghold until the Spanish conquered it in 1572 and executed its ruler, Tupac Amarú.
In 1535, Pizarro set up a new capital at Lima and, as governor, was responsible for bringing many settlers to Peru. Most settlers were involved in mining the vast amounts of silver and gold that existed in Peru. The Spanish were allowed to force the Incas to work for them for low wages. They used forced labour in the army, to build new cities and to mine silver and gold.
You have already heard that conquistadors often fought among themselves. Diego de Almagro, Pizarro's former partner, fought with Pizarro over Cuzco. The power struggle between Pizarro and Almagro led to the War of Las Salinas in 1538. Almagro was executed, but his son, known as Almagro the Lad, continued the war. Pizarro was murdered in his palace in Lima by followers of Almagro in 1541.
Resistance to Spanish colonialism
The Aztec and Inca Empires covered very large areas and consisted of millions of people. It was only after long and bloody battles that they gave up their capitals to the invaders. The European diseases that reduced the population of the indigenous people of the Caribbean islands also affected the Aztecs, and to a lesser degree the Incas.
The Spanish were less successful against the people who occupied other areas of Central and South America. These people attacked unexpectedly and took advantage of the fact that they outnumbered the Spanish. In 1542 the Spanish founded the city of Mérida in the north-western corner of the Yucatán Peninsula, but they controlled only some of the areas around this city. The biggest part of the peninsula was still ruled by Mayan communities.
Resistance Case Study 3
The Spanish encountered particularly fierce resistance from the Araucanian tribes. After the conquest of the Inca Empire, a Spanish force moved southward to found the city of Santiago in 1541. They gained control over the fertile central region of present-day Chile. The Araucanians lived in the southern part of Chile, and resisted Spanish control until well into the nineteenth century. The Spanish built a line of forts to defend their settlements against continuous Araucanian attacks and raids. The Araucanians adapted to the European style of warfare by making spears to fight the Spanish while they were on their horses. The Araucanians were finally defeated at the end of the 1870s and forced to live in reservations.
Resistance Case Study 4
A distinct type of resistance in exploitation colonies was the slave revolt. The most dramatically successful was the Haitian Slave Revolt, on the Caribbean island of Hispaniola, led by François Dominique Toussaint Louverture. The revolt lasted from the early 1790s until 1804, when Haiti received its independence. There were many other slave revolts throughout the Caribbean and Brazil. Some of these revolts failed and many slaves who had participated in revolts were brutally tortured and executed.
The legacy of the Spanish in Central and South America
- Disease and forced labour drastically reduced the population of Central America. It is estimated that the population of Mexico was reduced by ninety per cent in the first fifty years after the arrival of the Spanish.
- In Central and South America, the Spanish settlers eventually intermarried with the Incas and Aztecs as most of the settlers were men. The people of mixed racial descent are known as mestizo and now form the majority of the population.
- The official language of the former Spanish colonies in the Americas is Spanish but there are many people who still speak their indigenous languages.
- The indigenous people were also converted to Catholicism which remains the dominant religion in Central and South America.
Colonialism in Africa
Early colonialism in Africa, Portuguese trading stations in West Africa
Portuguese expansion into Africa began with the desire of King John I to gain access to the gold-producing areas of West Africa. The trans-Saharan trade routes between Songhay and the North African traders provided Europe with gold coins used to trade spices, silks and other luxuries from India. At the time there was a shortage of gold and rumours were spreading that there were states in the south of Africa which had gold. This news encouraged King John's son, Prince Henry, to send out expeditions to explore these possibilities.
At first, the Portuguese established trading stations along the west coast of Africa rather than permanent settlements. They built forts at Cape Blanco, Sierra Leone and Elmina to protect their trading stations from rival European traders. In this way, the Portuguese diverted the trade in gold and slaves away from the trans-Saharan routes causing their decline and increased their own status as a powerful trading nation.
During the 1480s the Portuguese came into contact with the kingdom of the Kongo, situated south of the Congo river in what is today northern Angola. The Kongo became powerful through war and capturing and enslaving the people they defeated.
The Portuguese did not conquer this region but chose rather to become allies of the Kongo king. The king was eager to make use of Portuguese teachers and craftsmen to train his people. He also allowed Catholic missionaries to work among his people. The Portuguese traded guns for slaves captured by the Kongo in wars against rival kingdoms in the interior. Other than small amounts of copper and raffia cloth, the area did not provide any profitable trade in gold or silver, which was disappointing for the Portuguese. The traffic in slaves more than made up for this disappointment.
In the 1490s sugar plantations were established on the islands of São Tomé and Príncipe. The Portuguese settlers on these islands used slaves bought from the Kongo traders to work on these plantations. Very soon São Tomé became the largest producer of sugar for Europe. When Brazil became a Portuguese colony in the 1530s, the demand for slaves to work on the sugar plantations established there increased. São Tomé became an important holding station for slaves before they left on the trans-Atlantic voyage to South America.
As the demand for slaves increased in Brazil, the São Tomé traders found a better supply of slaves further south near Luanda and Benguela. Wars fought in this region provided a constant supply of slaves. In exchange for slaves, the Portuguese provided the Ndongo and Lunda kings with guns, cloth and other European luxuries. The guns enabled the kings to defeat their enemies and maintain a dominant position in the region.
In 1641, the Dutch seized the slave trade in Angola away from the Portuguese and they were able to control it until 1648 when the Portuguese took back control again. Angola only became a Portuguese colonial settlement after the decline of the slave trade in the nineteenth century.
The legacy of the Portuguese in western-central Africa
- The Portuguese introduced agricultural products grown in South America such as maize, sugar cane and tobacco. Coffee plantations were introduced to Angola in the nineteenth century. Coffee is one of Angola's major exports today.
- The Portuguese introduced guns to the region which changed the nature of warfare and enabled their allies to dominate other kingdoms.
- The Portuguese encouraged wars between rival kingdoms to maintain a constant supply of slaves. The result of this was that the region was constantly at war and millions of young people, mainly men, were forced to leave Africa and work as slaves in the Americas.
- The Portuguese language is mainly spoken in urban areas of Angola today. However, the indigenous languages have survived among the rural population.
- In modern Angola, about ninety per cent of the population is Christian, mainly Catholic, as a result of Portuguese missionary activity in the area. The remainder of the population follows traditional African religions.
Portuguese trading stations in East Africa
A well-established gold and ivory trade network existed between African kingdoms in the interior and cities on the east coast of Africa. For centuries Arabs had traded with African kingdoms such as Great Zimbabwe and Mwanamutapa in order to supply Arabia, the Persian Gulf, India and even China with African ivory and gold. The Arab settlers intermarried with the indigenous African people living along the east coast. They introduced Islam and influenced the development of the Swahili language. A new coastal society emerged that was a mixture of African and Islamic traditions. This prosperous society built beautiful cities along the coastline from where they conducted trade with Arab merchants. The most important of these cities were Zanzibar, Kilwa, Mombasa, Mozambique Island and Sofala.
In the sixteenth century the Portuguese drove the Arabs away from the east coast of Africa and established their own trade monopoly in the region. They arrived with heavily-armed ships and demanded that the Muslim sultans (or rulers) accept the authority of the king of Portugal by paying a large tribute. If they refused to do this, the cities were looted and destroyed. The Portuguese regarded this as a continuation of the 'holy Christian war' they had been fighting against the Muslims in Europe for centuries.
Zanzibar was the first of these cities to be attacked in 1503. The city was bombarded with cannon fire from the ships of the Portuguese captain, Ruy Lourenço Ravasco. In 1505, Francisco d'Almeida arrived with eleven heavily-armed ships that destroyed Kilwa, Mombasa and Barawa. To strengthen their position along the coast the Portuguese erected massive stone fortresses in Kilwa, Sofala, Mozambique Island and Mombasa. These fortresses enabled them to control the trade in the western Indian Ocean as well as the trade with the African kingdoms in the interior.
From Sofala they conducted trade in ivory, gold and slaves with the Mwanamutapa kingdom. Trading stations were also established at Quilimane north of Sofala, and at Sena and Tete along the Zambezi River. Further south, Lourenço Marques was sent to Delagoa Bay to establish trade with the indigenous people living there.
The Portuguese control of the Indian Ocean trade
The Portuguese did not have an easy time on the east coast of Africa. They found the climate inhospitable and many died of tropical diseases. They were also constantly attacked by hostile inhabitants of the area and were unable to conquer the interior of Africa. They managed to keep control by making alliances with warring clans and promising to help them against their enemies.
The Portuguese rulers believed it was their duty to spread the Catholic religion. Missionary activity began in 1560. Both the Jesuits and Dominicans were active in converting Africans to Catholicism. They even managed to convert one of the heirs to the Mwanamutapa dynasty who gave up his right to be king and joined a convent in Santa Barbara in India.
By the early sixteenth century the Portuguese had established a string of bases in Asia, including Hormuz at the tip of the Persian Gulf; Goa on the west coast of India and the Straits of Molucca in the East Indies.
From these bases, the Portuguese could control the sea-going trade of the entire western Indian Ocean. However, Portugal was mainly a maritime power; it was not able to defeat other military powers. When larger European nations like the Dutch, English and French arrived in the area, Portuguese power and control ended, and by 1650 they only had control in ports such as Delagoa Bay, Mozambique Island and Mombasa. Mozambique (Portuguese East Africa) was only recognised as a Portuguese colony by the other European powers in 1885.
The Portuguese legacy in East Africa
- The Portuguese destroyed the Arab trade routes in the Indian Ocean between Africa, Arabia and India.
- The Portuguese replaced Arab control of the trade in ivory, gold and slaves with their own.
- They traded up the Zambezi river and interfered with the existing inland African trade. Only kingdoms that co-operated with the Portuguese benefited from this interference.
- Portuguese is still spoken in Mozambique, but the majority of the rural population speaks one of the indigenous Bantu languages.
- Only thirty per cent of the population is Christian, mostly Catholic. The majority of the population practise traditional African religions or no religion at all.
The Dutch in Southern Africa
The Dutch challenged Portuguese domination of the Indian Ocean trade in the late sixteenth century when they began trading in spices, calico and silks in the East and gold, copper, ivory and slaves in Africa. In the seventeenth and early eighteenth centuries the Netherlands became the wealthiest European trading nation, until Britain challenged them in the eighteenth and nineteenth centuries.
The Dutch East India Company (known by the Dutch abbreviation VOC) was established in 1602 to conduct Dutch trade with the East Indies. Its headquarters were in Jakarta on the island of Java. Because the journey to the East took so long, European shipping nations stopped at the Cape of Good Hope to collect fresh water and food. The Khoikhoi people at the Cape traded sheep, cattle, ivory, ostrich feathers and shells for beads, metal objects, tobacco and alcohol. Unlike the Portuguese, the Dutch did not trade guns as they did not want the Khoikhoi to use the guns against them.
In 1652, the VOC decided to establish a permanent refreshment station at the Cape. Jan van Riebeeck was appointed commander of this station. It was his responsibility to build a fort for their protection and a hospital for sick sailors. Employees of the company planted vegetables and obtained meat from the Khoikhoi so that they could supply the ships as they called in at Table Bay. French and English ships were also allowed to stop at the Cape, but they were charged very high prices.
Expansion of the Dutch settlement
Increasingly the Khoikhoi lost land and cattle to the Dutch as the settlement grew. This brought the Dutch into conflict with the powerful Cochoqua chief, Gonnema, who refused to trade with the VOC. The Company used rival Khoikhoi clans to raid the Cochoqua herds between 1673 and 1677. This is known as the Second Khoikhoi-Dutch War. The Cochoqua were defeated and lost all their cattle and sheep to the Dutch and their Khoikhoi allies. The boers then settled on their land.
As the settlement grew, some of the farmers became hunters and cattle farmers in the interior of the Cape. They were known as 'trekboers' because they lived in ox-wagons and were always on the move. They were granted large pieces of land each and allowed their cattle to graze on the land until it was overgrazed and then they would move on.
In the 1680s and 1690s the VOC encouraged Dutch and French Huguenot immigration to the Cape. The new arrivals were settled in the fertile valleys of Paarl, Stellenbosch and Franschhoek. Wheat and grapes for wine were grown in this area for the settlement and for export to the passing ships. The settlers were sold slaves from Madagascar, Mozambique and Indonesia to work the land.
Khoikhoi resistance in the interior
The Khoikhoi were at a disadvantage in their struggle to resist the expansion of the Dutch settlement at the Cape. They had no guns or horses and were nearly wiped out by a series of smallpox epidemics that swept through the Cape starting in 1713. Like the Aztecs in Mexico, they had no immunity against European diseases and they died in their thousands.
The Khoikhoi found different ways to resist Dutch expansion. At first they resisted by attacking and raiding Dutch farms. In reaction, the trekboers formed themselves into military groups called 'commandos' and attacked the Khoikhoi in order to get back their cattle. As a result, hundreds of Khoikhoi people were killed. As soon as the commandos returned to their farms, the Khoikhoi attacked again, setting in motion a continuous cycle of attack and counter-attack.
In the end the Khoikhoi had two options. Either they could move into more remote and drier regions of the expanding colony or else they could become servants of the boers acting as trackers, herdsmen and shepherds. Some even joined boer commandos and attacked other Khoikhoi groups. The boers were not allowed to enslave the indigenous people of South Africa, so these Khoikhoi servants remained free citizens, but they were seldom paid wages. They were usually paid in food, clothing, housing, brandy and tobacco. They were sometimes allowed to keep cattle, but they lost their independence and with that much of their culture and language. In the Eastern Cape, many Khoikhoi people were absorbed into Xhosa society.
The impact of Dutch rule at the Cape
- The arrival of Dutch settlers marked the permanent settlement of Europeans in Southern Africa.
- Dutch laws, customs and attitudes towards race were brought to South Africa and Dutch people became the ruling class until the Cape was taken over by the British in 1806.
- The Dutch did not actively encourage the Khoikhoi or slaves to become Christians as this would imply they were equal.
- The process of land dispossession of indigenous people in South Africa began soon after the arrival of the Dutch and lasted until 1994.
- Racial mixing occurred at the Cape, but it was never openly accepted like it was in colonies such as Brazil and Mexico. A few legal marriages did occur between different races, but most of the relationships across race lines were between European men and their female slaves or Khoikhoi servants. The children of these relationships formed part of what is known today as the Cape Coloured community.
- Freed slaves were also included into the Cape Coloured community. Many of the freed slaves were Muslims and maintained their Malay cultural and religious traditions.
- The Dutch language became simplified as it was spoken by the multi-cultural community that existed at the Cape. Portuguese, Malay and Khoikhoi words were included in the common language now spoken, which became known as 'Afrikaans'.
European control of India Britain takes control of India
In May 1498 Vasco da Gama sailed into the harbour of Calicut (now Kozhikode) on the Malabar Coast of India. The Portuguese dominated the trade routes on the coast of India during the sixteenth century. The Dutch forced the Portuguese out of India in the seventeenth century. The Dutch East India Company was soon followed by the English East India Company. Both companies began by trading in spices, but later shifted to textiles. They operated mostly on the southern and eastern coasts of India and in the Bengal region. The French also joined trade in India in about 1675.
The English East India Company founded trading stations known as factories at Surat (1612) and Madras (1639) (now Chennai) under the authority of the Mogul Empire. Rapid growth followed, and in 1690 the company set up a new factory further up the Hugli river, on a site that became Calcutta (now Kolkata). By 1700 the company had extended its trading activities in Bengal and used this as a reason to involve itself in Indian politics.
As the French and British were fighting over the control of India's trade, the Mogul Empire was experiencing serious problems and regional kingdoms were becoming more powerful. The emperor, Aurangzeb, was a harsh ruler who did not tolerate the Hindu population and often destroyed their temples. He tried to force Indians to become Muslims against their will. As a result of this he was not a popular ruler. Soon after his death in 1707, the empire began to disintegrate.
The French and British took advantage of the weakness of the Mogul empire. They offered military support to the regional rulers who were undermining the empire. The British and the French kept increasing their own political or territorial power while pretending to support a specific local or regional ruler. By 1750 the French managed to place themselves in a powerful position in southern India, but a year later British troops took the French south-eastern stronghold by force.
In Bengal, the English East India Company strengthened Fort William in Calcutta (now Kolkata) to defend itself against possible attacks by the French. This area was part of the Mogul Empire, and its regional ruler, the Nawab of Bengal, attacked Calcutta in 1756. After this attack the British governor moved north from Madras and secretly conspired with the commander of the Nawab's army. The Nawab was defeated at Plassey by Company troops under the command of Robert Clive in 1757.
The French attempted to regain their position in India but were forced to give up Pondicherry in 1760. In 1774 the British again defeated local rulers and firmly established British control over the Bengal region.
Resistance in India
Between 1800 and 1857 the English East India Company extended British control by fighting wars against Afghanistan, Burma, Nepal, the Punjab and Kashmir. They made use of both Indian and British soldiers to gain more land.
The Indian population did not like British rule. This led to the Sepoy Rebellion of 1857, in which Indian soldiers (called sepoys) staged an armed uprising. The rebellion failed because it lacked good leaders and did not have enough support. The uprising did not overthrow British rule, but many lives were lost during the rebellion. The British then focused on governing efficiently while including some traditional elements of Indian society. After 1858 India was no longer controlled by the East India Company and was brought directly under British rule instead.
Britain did not control the whole of India at this time. Many princes signed treaties with the British and agreed to co-operate with the British. In other areas the British appointed Indians as princes and put them in charge. In this way Britain ruled the so-called Indian States indirectly. Queen Victoria of Britain appointed a viceroy to rule India.
The impact of British rule on India
- Colonial empires became rich and powerful as their empires grew in size. However, colonies were expensive to run, especially if wars were involved. Wars were fought between rival empires who wanted the same land or to defeat rebellious indigenous inhabitants.
- Europe, in particular Britain, was able to industrialise because of raw materials obtained from colonies and because colonies provided markets for manufactured goods. Slavery did not start because of colonialism; slavery has always existed. However, European powers were able to exploit their colonies and increase their wealth by using slave labour or very cheap indigenous labour.
- Colonialism did not cause racism, but it helped to reinforce the belief that Europeans were the dominant race and therefore superior and that other races were subordinate and therefore inferior.
- On the other hand, colonialism provided opportunities for people of different races, religions and cultures to meet, live and work together. The result of this has been an exchange of ideas, technology and traditions.
- The spread of Christianity throughout the world was made possible by missionary activities. This was assisted by the expansion of European colonial empires.
- Church and state worked together to change the indigenous belief systems of the people they ruled. Colonial expansion also brought Christianity into conflict with Islam as European powers challenged Muslim rulers and traders.
European domination of the world
The expansion of European trade resulted in the colonisation of five continents over a period of five centuries. Using military force, each of the European colonial powers dominated world trade at different times. When one colonial power became weak, another challenged it and replaced it as the dominant power.
What was the effect of colonialism?
- Britain dominated trade in India after the collapse of the Mogul empire.
- The British maintained political control through military force.
- The British ruled India by controlling the regional rulers.
- The British built a railway system throughout India and introduced a telegraph and telephone system.
- Only two per cent of the Indian population can speak English. It is the language used by educated businessmen and politicians. The official language of India is Hindi but there are more than a thousand languages spoken in India today.
- India is the largest democracy in the world today. Although they did not rule democratically, the British did leave this legacy to the country when they granted it independence in 1947.
Source: http://www.sahistory.org.za/topic/impact-colonialism
Movement advocating the immediate end of slavery. The abolitionist movement began in earnest in the United States in the 1820s and expanded under the influence of the Second Great Awakening, a Christian religious movement that emphasized the equality of all men and women in the eyes of God. Most leading abolitionists lived in New England, which had a long history of anti-slavery activity, but the movement also thrived in Philadelphia and parts of Ohio and Indiana.
Paint made with pigment (color) suspended in acrylic polymer (a synthetic medium), rather than in natural oils, such as linseed, used in oil paints. It is a modern medium that came into use in the 1950s. Unlike oil paint, it is fast drying and water soluble.
Three-dimensional art made by building up material (such as clay) to produce forms, instead of carving it away.
Type of photograph that is printed on paper coated with silver salts (the substance that turns dark when it is exposed to light in a camera) suspended in egg whites (albumen). Albumen prints were more popular than daguerreotypes, which they replaced, because multiple copies could be printed and they were less expensive. Albumen prints were often toned with a gold wash, which gives them a yellowish color.
Symbolic representation of an idea, concept, or truth. In art, allegories are often expressed through symbolic fictional figures, such as “Columbia,” a woman who represents America; or Father Time, an old man with an hourglass and scythe.
Type of photograph made between 1850 and 1860 in which a negative was attached to a piece of glass with black paper or cloth behind it. Against the black background, the tones of the resulting photograph are reversed, so that it reads as a positive image. The ambrotype went out of use when less expensive methods of photography were invented, like the albumen print.
Latin for “before the war.” It refers to the period between 1820 and 1860 in American history.
Term encompassing a range of ideas opposing slavery. It included abolitionism, or the idea that slavery should be ended immediately. But it also included other positions, including colonization and gradual emancipation. Some anti-slavery figures (like Abraham Lincoln) opposed slavery as a moral wrong, but did not seek to end it where it already existed, mostly because they believed that slavery was protected by the Constitution. Others had no moral concerns about slavery, but opposed the expansion of the institution because they believed that wage laborers could not compete in a slave-based economy.
Antrobus, John (1837–1907):
Sculptor and painter of portraits, landscapes, and genre scenes (showing everyday life). Antrobus was born in England but came to Philadelphia in 1850. During his travels through the American West and Mexico, he worked as a portraitist before opening a studio in New Orleans. He served briefly with the Confederate Army during the Civil War before moving to Chicago. Antrobus sculpted both Abraham Lincoln and Stephen Douglas and was the first artist to paint a portrait of Ulysses S. Grant (in 1863).
Army of the Potomac:
Largest and most important Union army in the Eastern Theater of the Civil War, led at various times by Generals Irvin McDowell, George McClellan, Ambrose Burnside, Joseph Hooker, and George Meade. From 1864–1865, General Ulysses S. Grant, then Commander-in-Chief of all Union forces, made his headquarters with this Army, though General Meade remained the official commander. The army’s size and significance to the war meant that it received a great deal of attention in newspapers and magazines of the day. Artist Winslow Homer lived and traveled with the army at various times when he worked for Harper’s Weekly as an illustrator.
Army of Northern Virginia:
Primary army of the Confederacy and often the adversary of the Union Army of the Potomac. Generals P. G. T. Beauregard and Joseph E. Johnston were its first leaders; after 1862 and to the end of the war, the popular General Robert E. Lee commanded it. On April 9, 1865, Lee surrendered his army to Union General-in-Chief Ulysses S. Grant in the small town of Appomattox Court House, effectively ending the Civil War.
Collection of weapons or military equipment. The term arsenal also refers to the location where weapons or equipment for military use is stored.
Discipline that seeks to understand how artworks were made, what history they reflect, and how they have been understood.
Surprise murder of a person. The term is typically used when individuals in the public eye, such as political leaders, are murdered.
Atkinson, Edward (1827–1905):
American political leader and economist who began his political career as a Republican supporter of the Free Soil movement. Atkinson fought slavery before the Civil War by helping escaped slaves and raising money for John Brown. After the Civil War, in 1886, Atkinson campaigned for future President Grover Cleveland and worked against imperialism (the movement to expand a nation’s territorial rule by annexing territory outside of the main country) after the Spanish-American War.
Ball, Thomas (1819–1911):
American sculptor who gained recognition for his small busts before creating more monumental sculptures. Notable works include one of the first statues portraying Abraham Lincoln as the Great Emancipator (1876), paid for by donations from freed slaves and African American Union veterans, which stands in Washington D.C.’s Lincoln Park. Ball also created a heroic equestrian statue of George Washington for the Boston Public Garden (1860–1864). He joined an expatriate community in Italy, where he received many commissions for portrait busts, cemetery memorials, and heroic bronze statues.
Barnard, George N. (1819–1902):
Photographer known for his work in daguerreotypes, portraiture, and stereographs. Barnard devoted much of his time to portraiture after joining the studio of acclaimed photographer Mathew Brady. He produced many group portraits of soldiers in the early years of the Civil War. Barnard was employed by the Department of the Army and traveled with General William T. Sherman, an assignment that would yield the 61 albumen prints that compose Barnard’s Photographic Views of Sherman’s Campaign. In the post-war years, he operated studios in South Carolina and Chicago, the latter of which was destroyed in the 1871 Chicago Fire.
Battle of Gettysburg:
Fought July 1–3, 1863, in and around the town of Gettysburg, Pennsylvania, this battle was a turning point in the Civil War. Union forces stopped Confederate General Robert E. Lee's second (and last) attempt to invade the North. The Union emerged victorious, but the battle was the war's bloodiest, with fifty-one thousand casualties (twenty-three thousand Union and twenty-eight thousand Confederate). President Abraham Lincoln delivered his famous "Gettysburg Address" on November 19, 1863, at the dedication of the Soldiers' National Cemetery at Gettysburg.
Bell, John (1797–1869):
Politician who served as United States Congressman from Tennessee and Secretary of War under President Harrison. On the eve of the Civil War in 1860, Bell and other people from Border States formed the Constitutional Union Party. Under its moderate, vague platform, the Constitutional Unionists stood for upholding the Constitution and preserving the Union; they were pro-slavery but anti-secession. Bell lost the election, receiving the lowest percentage of the popular vote and only winning the states of Tennessee, Kentucky, and Virginia. During the Civil War, Bell gave his support to the Confederacy.
Bellew, Frank Henry Temple (1828–1888):
American illustrator who specialized in political cartoons and comic illustrations. Before, during, and after the Civil War, Bellew’s illustrations appeared in newspapers and illustrated magazines such as Vanity Fair and Harper’s Weekly. He is perhaps most famous for his humorous cartoon “Long Abraham Lincoln a Little Longer” and his image depicting “Uncle Sam” from the March 13, 1852, issue of the New York Lantern. His Uncle Sam illustration is the first depiction of that character.
Bierstadt, Albert (1830–1902):
German-American painter and member of the Hudson River School of landscape painting. Bierstadt spent time in New England and the American West and is well known for his large landscapes that highlight the scale and drama of their setting. A member of the National Academy of Design, he worked in New York City and had a successful career until near the end of his life when his paintings temporarily fell out of style.
Billings, Hammatt (1819–1874):
American artist, designer, and architect. Billings lived in Boston for the majority of his life, and designed several public buildings and monuments in the New England region. He became famous for his work as an illustrator. He illustrated over 100 books, including works by Nathaniel Hawthorne, Charles Dickens, and Harriet Beecher Stowe. His illustrations of Stowe’s 1852 novel Uncle Tom’s Cabin, were particularly well-regarded, and helped launch his successful career.
Bishop, T. B. (active, 19th century):
American photographer whose image of an escaped slave was turned into an illustration for the popular illustrated magazine Harper's Weekly.
Blythe, David Gilmour (1815–1865):
Sculptor, illustrator, poet, and painter best known for his satirical genre painting (showing everyday life). His work focused mainly on the American court system and the condition of poor young street urchins. Blythe also produced many politically-charged canvases supporting his Unionist views in the years leading up to and during the Civil War.
Booth, John Wilkes (1838–1865):
American stage actor who assassinated President Lincoln. Booth was active in the anti-immigrant Know-Nothing Party during the 1850s. He supported slavery and acted as a Confederate spy during the Civil War. In 1864, Booth planned to kidnap Lincoln and bring him to the Confederate government in Richmond, Virginia. But after the fall of Richmond to Union forces, Booth changed his mind, deciding instead to assassinate Lincoln, Vice President Andrew Johnson, and Secretary of State William Seward. On April 14, 1865, Booth shot Lincoln at Ford’s Theatre and then fled. Union soldiers found and killed Booth on April 26, 1865.
Slaveholding states that did not secede from the Union during the Civil War. Geographically, these states formed a border between the Union and the Confederacy, and included Delaware, Maryland, Kentucky, Missouri, and later, West Virginia (which had seceded from Virginia in 1861). Of these, Maryland, Kentucky, and Missouri were particularly important to Union war policy as each of these states had geographic features like rivers that the Union needed to control the movement of people and supplies. Most of the Border States had substantial numbers of pro-secession citizens who joined the Confederate army.
Borglum, John Gutzon de la Mothe (1867–1941):
American sculptor and engineer best known for his Mount Rushmore National Memorial comprising monumental portraits of presidents Washington, Jefferson, Lincoln, and Roosevelt carved out of the mountain. Borglum began his career as a painter but was dissatisfied with the medium. He later studied at the Académie Julian in Paris, where he was influenced by the bold sculptor Auguste Rodin. Borglum believed that American art should be grand in scale, like the nation itself. He received commissions for several monumental sculptures during his career, including a six-ton head of Lincoln and the 190-foot wide Confederate Memorial in Stone Mountain, Georgia.
Brady, Mathew (1823–1896):
American photographer, perhaps best known for his photographs of the Civil War. Brady studied under many teachers, including Samuel F. B. Morse, the artist and inventor who introduced photography to America. Brady opened a photography studio in New York City in 1844 and in Washington, D.C. in 1856. During the Civil War, he supervised a group of traveling photographers who documented the war. These images depicted the bloody reality of the battlefield. They convinced Americans that photography could be used for more than portraiture. Congress purchased his photographic negatives in 1875.
Breckinridge, John (1821–1875):
Democratic politician from Kentucky who served as a Congressman and then as Vice President of the United States under James Buchanan before running for president in 1860 as a Southern Rights Democrat. Breckinridge lost the election, winning only Deep South states. During the war, Breckinridge held the rank of Major General in the Confederate army and briefly served as the Confederate Secretary of War.
Bricher, Alfred Thompson (1837–1908):
American specialist in landscape, focusing on marine and coastal paintings. Largely self-taught, Bricher studied the works of artists he met while sketching New England. Bricher had a relationship with L. Prang and Company, to which he supplied paintings that were turned into popular, inexpensive chromolithographs. During his career, Bricher worked in watercolor and oil paint and traveled through New England, the Mississippi River Valley, and Canada. His style moved from the precise, detailed realism of his early career to a looser brush style that evokes romantic themes of loss and the power of nature.
Briggs, Newton (active, 19th century):
Photographer who created portraits of Abraham Lincoln and Hannibal Hamlin used as campaign ephemera.
A large printed poster used for advertising or for political campaigns. Broadsides were often inexpensively and quickly made, and intended to send a message rather than be a work of art.
Brown, John (1800–1859):
Radical abolitionist leader who participated in the Underground Railroad and other anti-slavery causes. As early as 1847, Brown began to plan a war to free slaves. In 1855 he moved to the Kansas territory with his sons, where he fought and killed proslavery settlers. In 1859, he led a raid on a federal arsenal in Harpers Ferry, Virginia, hoping to start a slave rebellion. After the raid failed, Brown was captured, put on trial, and executed for his actions. Brown was praised as a martyr by abolitionists, although the majority of people thought he was an extremist.
Metal sculpture made by pouring a molten alloy (metallic mixture) of copper and tin into a mold. The mold is removed when the metal has cooled, leaving the bronze sculpture. Bronzes are designed by artists but made at foundries.
Sculpture portraying only the top half of a person’s body: their head, shoulders, and typically their upper torso.
Buttre, John Chester (1821–1893):
New York City-based engraver who was responsible for publishing The American Portrait Gallery, a collection of biographies and images of notable American public figures. Buttre was a partner in the firm of Rice & Buttre. He created sentimental images of the Civil War that sold well.
Cade, John J. (active, 19th century):
Canadian-born engraver of portraits who worked for New York publishers. In 1890 he was living in Brooklyn, New York. Cade worked with illustrator Felix Octavius Carr Darley.
Representation in which a person’s traits are exaggerated or distorted. These are usually made for comic or satirical effect.
French term for “visiting card.” These small (usually 2 1/2 x 4 inches) photographs mounted on cardboard were so named because they resembled visiting or business cards. Exchanged among family members and friends, these first appeared in the 1850s and replaced the daguerreotype in popularity because they were less expensive, could be made in multiples, and could be mailed or inserted into albums.
Carter, Dennis Malone (1818–1881):
Irish-American painter of historical scenes and portraits. Carter worked in New Orleans before moving to New York City. He exhibited his paintings in art centers like New York and Philadelphia, and mainly became known for his paintings of historical scenes.
Carter, William Sylvester (1909–1996):
African American painter. Carter was born in Chicago and studied at the School of the Art Institute of Chicago. During the 1930s, he was involved with the Works Progress Administration, a jobs program that helped artists and other workers weather the Great Depression.
Copy of three-dimensional form, made by pouring or pressing substances such as molten metal, plaster, or clay into a mold created from that form. The term is also used to describe the act of making a cast.
Elaborate, temporary decorative structure under which a coffin is placed during a visitation period or funeral ceremony.
Type of curved sword with a single edge, commonly carried by cavalry units, or those trained to fight on horseback. The cavalry saber was a standard-issue weapon for Union cavalry troops during the Civil War, but was used less often by Confederates. As rifles improved, however, the usefulness of cavalry sabers decreased, and cavalrymen carried them more for decoration or intimidation than for actual fighting.
Chappel, Alonzo (1828–1887):
American illustrator and painter of portraits, landscapes, and historical scenes. Chappel briefly studied at the National Academy of Design in New York. Focusing on portrait painting early in his career, Chappel became famous for providing illustrations for books about American and European history. Many of his illustrations included important events and people in American History through the Civil War. During and after the Civil War, Chappel painted Civil War battle scenes and leaders, like President Lincoln.
Church, Frederic Edwin (1826–1900):
American landscape painter who studied under Thomas Cole, the founder of the Hudson River School of painting. Elected to the National Academy of Design at age twenty-two, Church began his career by painting large, romantic landscapes featuring New England and the Hudson River. Influenced by scientific writings and art theory, Church became an explorer who used his drawings and sketches as a basis for studio paintings. He traveled to South America, the Arctic Circle, Europe, Jamaica, and the Middle East, and earned an international reputation as America’s foremost landscape painter.
A person who is a citizen and not a member of a branch of the military.
Civil Rights Movement:
Civil rights are literally “the rights citizens enjoy by law.” The modern United States Civil Rights Movement occurred between 1954 and 1968 and sought to achieve the equal rights African Americans had been denied after the Civil War. Organized efforts like voter drives and the use of non-violent techniques to desegregate public space helped to draw national attention to the injustice of segregation, which was particularly widespread in the South. These efforts led to new laws that ensured equal voting rights for African Americans and banned discrimination based on race, color, religion, or national origin.
Ideas, objects, or forms that are often associated with ancient Greece and Rome; but the term can be applied to the achievements of other cultures as well. The term also refers to established models considered to have lasting significance and value or that conform to established standards.
Colman, Samuel Jr. (1832–1920):
American landscape painter influenced by the Hudson River school, America’s first native landscape painting movement. In his early career, Colman studied at the National Academy of Design and painted scenes of New England. Colman became a master of the newly popular technique of watercolor painting. After the Civil War, Colman had a diverse career: painting the American West, Europe, and North Africa, learning to create etchings, and working in design. In addition to watercolor, Colman worked increasingly in drawing and pastel. Later in life, Colman wrote and published essays on art and worked to place his collections in various museums.
Movement led by the American Colonization Society (A.C.S.), which was founded in 1816. In the antebellum period, the movement sought to gradually end slavery and relocate freed African Americans outside of the United States. Members were mainly white people who were opposed to slavery but doubted that the races could live peacefully together. Some African Americans joined the colonizationists, mostly because they feared being ill-treated in the United States. In 1822, the A.C.S. created the West African colony of Liberia to receive freed slaves. Abolitionists opposed colonization as immoral, insisting that the government should end slavery immediately and acknowledge equal rights for African Americans.
Act of placing an order for something, such as a work of art. An individual or group can commission a work of art, often with a portion of the payment made to the artist in advance of its completion (for the purchase of supplies, etc.). Public monuments and painted portraits are usually commissioned, for example.
Member of the military who holds a commission, or rank. In the Union army, the commissioned ranks included first and second lieutenant, captain, major, lieutenant colonel, colonel, brigadier general, major general, and lieutenant general. In the Confederate army, the ranks were the same except that there was only one form of general. The officer received this commission and authority directly from the government. A non-commissioned officer refers to an enlisted member of the military who has been delegated authority by a commissioned officer. Non-commissioned officers in both armies included sergeant, corporal, and the lowest rank: private.
Way in which the elements (such as lines, colors, and shapes) in a work of art are arranged.
Compromise of 1850:
Series of five bills passed by Congress in 1850 intended to solve a national crisis over whether slavery should expand into the West. It brought California into the Union as a free state, organized the New Mexico and Utah territories under popular sovereignty, banned the slave trade (but not slavery) in Washington, D.C., created a stronger fugitive slave law, and settled the boundaries of Texas. While this compromise was thought to be a final solution to the dispute over slavery in the American territories, it lasted only a short time as the same issues arose again with the organization of the Kansas and Nebraska Territories in 1854.
Confederate States of America (C.S.A.):
Government of eleven slave states that seceded from the United States of America. The first six member states (South Carolina, Georgia, Florida, Alabama, Mississippi, and Louisiana) founded the Confederacy on February 4, 1861. Texas joined very shortly thereafter. Jefferson Davis of Mississippi was its president. When Confederate forces fired upon Union troops stationed at Fort Sumter on April 12–13, 1861, President Abraham Lincoln called for seventy-five thousand militia men to put down what he referred to as an “insurrection.” At that point, four additional states—North Carolina, Virginia, Tennessee, and Arkansas—also seceded in protest of the Union’s coercive measures.
Political party organized during the presidential campaign of 1860 in response to the Democratic Party’s split into Southern and Northern factions. Members mostly came from the border slave states; they were hostile to free soil ideas, but equally uncomfortable with the secessionist ideas of the radical Southern wing of the Democratic Party. They adopted a moderate, vague platform that emphasized the need to preserve the Union and the Constitution. They nominated John Bell of Tennessee to run for president in the 1860 election, but he gained electoral votes only in Tennessee, Kentucky, and Virginia. The party dissolved shortly afterward.
An edge or outline in a work of art.
Term used by the Union army to describe runaway slaves who came under the army’s protection. It was coined by General Benjamin Butler, who in 1861 refused the request of Confederate slaveholders to return slaves who had run away to Union military lines. Before the war, law dictated that runaways had to be surrendered to their owners upon claim, but Butler argued that slaves were like any other enemy property and could be confiscated as “contraband” according to the laws of war. Butler was no abolitionist, but his policy was the first official attempt to weaken slavery in the South.
Temporary shelters run by the Union army throughout the occupied South and free states where refugee slaves (including the families of black soldiers) sought protection, food, and work.
Cope, George (1855–1929):
American landscape and trompe l’oeil painter. Cope was trained as a landscape painter but later transitioned to trompe l’oeil painting, producing highly realistic still lifes inspired by his passion for the outdoors and hunting. Cope spent most of his life and career in the Brandywine River Valley of Pennsylvania, though he traveled as far as the Pacific Northwest.
Copley, John M. (active, 19th century):
American author of the 1893 book A Sketch of the Battle of Franklin, Tenn.; with Reminiscences of Camp Douglas. Copley was a Confederate member of the 49th Tennessee Infantry.
Cash crop of the antebellum South that was produced almost entirely by slave labor. Before 1800, the South’s large farmers (planters) grew long-staple cotton, which was relatively cheap to clean by hand before sale. But long-staple cotton would only grow in coastal regions. With the invention of the cotton gin in 1793, planters throughout the South began planting short-staple cotton. The gin cleaned seeds from short-staple cotton—which was expensive to clean by hand but grew in virtually any climate in the South. The gin thus prompted the spread of cotton and slavery westward, making the planter class enormously wealthy and influential.
War fought from 1853 to 1856 between Russia and the combined forces of the Ottoman Empire, England, France, and Sardinia. The war ended Russia’s dominance in Southeastern Europe. It was incredibly bloody, resulting in some five hundred thousand deaths due to battle, disease, and exposure. Many aspects of this conflict anticipated the American Civil War, including the use of the telegraph and railroad to facilitate military movements, the use of rifled muskets, the advent of ironclad ships, the daily reporting of newspaper correspondents from the scenes of battle, and (though to a smaller degree) the use of photography to document warfare.
Crowe, Eyre (1824–1910):
British painter and writer, known for genre scenes (paintings of everyday life) and historical subjects. Crowe studied in Paris. While working for British author William Makepeace Thackeray, Crowe visited the United States in 1852–1853. His visits to Richmond, Virginia in 1853 and 1856 inspired his paintings showing the brutal reality of slavery in America.
Currier and Ives (1857–1907):
New York firm started by Nathaniel Currier and James Ives, later carried on by their sons. Specializing in affordable, hand-colored prints called lithographs, Currier and Ives employed numerous artists over the firm’s fifty-year history. Its prints covered thousands of different subjects, including famous people, famous events, landscapes, humor, and sports. These images appealed to the interests and feelings of middle-class Americans and were purchased by people all over the country. During the Civil War, Currier and Ives produced images about recent events, bringing images of the war into Americans’ homes.
Curry, John Steuart (1897–1946):
American artist who created paintings, prints, drawings, and murals, that portrayed the American rural heartland as a wellspring of national identity. A Kansas native, Curry studied at the Art Institute of Chicago before focusing on several decorative mural commissions and Kansas scenes, including a large mural depicting John Brown at the Kansas statehouse. Curry's designs proved controversial because they included what many Kansans regarded as unflattering depictions of their state. Although honored in his later years, the furor over the murals is said to have hastened Curry's death from a heart attack, at the age of forty-eight.
Early type of photograph invented by the Frenchman Louis-Jacques-Mandé Daguerre (1787–1851). Each image is one-of-a-kind and made on a polished silver-coated metal plate. Daguerreotypes were often called “the mirror with a memory” because their surface is so reflective. For protection, daguerreotypes were packaged behind glass inside a decorative case. Shortly after the process was made public by the French government in 1839, daguerreotypes were introduced in America. They were wildly popular in the 1840s and 1850s since they were more affordable than having a portrait painted.
Darley, Felix Octavius Carr (1822–1888):
American illustrator of magazines and books. Darley began his career in 1842 in Philadelphia. He also worked in New York City and Delaware. Darley became one of the most popular book illustrators in America after 1848, when he created illustrations that became engravings used in books by Washington Irving, James Fenimore Cooper, Nathaniel Hawthorne, Harriet Beecher Stowe, and Edgar Allan Poe. Darley’s images of American icons like pilgrims, pioneers, and soldiers were in high demand before, during, and after the Civil War.
Darling, Aaron E. (active, 19th century):
Artist who painted the Chicago abolitionist couple John and Mary Jones in c.1865.
Davis, Jefferson F. (1808–1889):
Democratic politician and Mexican War veteran who served as U.S. Senator and Secretary of War before becoming President of the Confederacy in 1861. Davis was born in Kentucky and educated at West Point; he served briefly in the U.S. Army before becoming a cotton planter in Mississippi. Though a strong supporter of slavery and slaveholders’ rights, he opposed secession. Nonetheless, when Mississippi seceded, he left the Senate, hoping to serve in the Confederate army. To his dismay, he was instead elected president of the Confederacy by its constitutional convention. After the war, he was indicted for treason and imprisoned, but never put on trial.
Embellishment or ornament meant to make something pleasing. The term also refers to an honor or commemoration.
Individual features, or a small portion of a larger whole.
Geographic region of the Southern United States including South Carolina, Georgia, Alabama, Mississippi, Louisiana, Florida, and Texas, also known as the Lower South or Deep South. These states had the highest slave populations in the South and their economies were heavily reliant on cotton cultivation (as well as sugar and rice). During the Civil War, each of the states seceded from the Union prior to the bombardment of Fort Sumter (April 12–13, 1861).
System of government through which citizens elect their rulers, based on ancient Greek philosophy and practice. The United States is a representative (or indirect) democracy, meaning that eligible adult citizens elect politicians to make decisions on their behalf. Democratic principles are based on the idea that political power lies with the people, but many democratic systems have historically limited the right to vote. In the United States during the Civil War, for instance, only white men could vote.
Party of opposition during the Civil War. Democrats believed in states’ rights, a strict interpretation of the United States Constitution, and a small federal government. Before the war, the party supported popular sovereignty in the Western territories. Southern Democrats abandoned the national party during the election season of 1860. During the secession crisis, Northern Democrats sought to restore the Union through compromise rather than military force, but the Confederacy rejected these attempts. After the attack at Fort Sumter (April 12–13, 1861), many Northern Democrats supported war on the Confederacy, but others opposed it, the draft, and emancipation.
Douglas, Stephen A. (1813–1861):
Democratic lawyer and politician from Illinois who served in the state legislature before his election to the U.S. Senate in 1847. As a Democratic leader, Douglas championed the policy of popular sovereignty (in which territories decided their slaveholding or free status). He is well known for his debates with Abraham Lincoln, his Republican challenger for the Senate in 1858. Though he won that election, Douglas lost to his rival in the presidential election of 1860. After the war began, he supported Lincoln and urged his party to follow suit. Two months later, he died from typhoid fever in Chicago.
Douglass, Frederick (1818–1895):
Former slave, author, and publisher who campaigned for the abolition of slavery. Douglass published his autobiography, Narrative of the Life of Frederick Douglass, an American Slave, Written By Himself, in 1845. Mentored by anti-slavery leader William Lloyd Garrison, Douglass developed his own philosophy of abolition, arguing that the Constitution could "be wielded in behalf of emancipation.” His newspapers, The North Star and Frederick Douglass’s Paper, led abolitionist thought in the antebellum period. He met with Abraham Lincoln during the Civil War and recruited Northern blacks for the Union Army. After the war, he continued fighting for African American civil rights.
Dred Scott v. Sanford:
Supreme Court decision of 1857 that declared that Dred Scott (and all African Americans) were not citizens of the United States and did not have rights as such. Dred Scott was the slave of an army surgeon named Dr. Emerson who had traveled with Scott to free states and territories. After Emerson’s death in 1846, Scott sued Emerson’s heirs claiming that his time in free areas made him a free man. The case was appealed to the United States Supreme Court, which ruled that neither federal nor territorial governments could outlaw slavery in the territories, therefore making free soil and popular sovereignty unconstitutional.
Election of 1860:
Historic presidential election. Four men ran in the race: Abraham Lincoln of Illinois for the Republican Party, Stephen Douglas of Illinois for the Democratic Party, John C. Breckinridge of Kentucky for the Southern Rights Democratic Party, and John Bell of Tennessee for the Constitutional Union Party. Abraham Lincoln won the election with a majority of the Electoral College, but without a majority of the overall popular vote. All of his support came from free states. Breckinridge dominated the Deep South states, Bell gained limited support in the border slave states, and Douglas was overwhelmingly defeated throughout the country.
Procedure established by the Constitutional Convention of 1787 whereby the states elect the President of the United States. It was a compromise between those who advocated election of the president by Congress and those who wanted election by popular vote. In the Electoral College, every state gets one vote for each of their senators (always two) and representatives in Congress (a minimum of one, with additional representatives determined by the size of a state’s population). In the Election of 1860, Abraham Lincoln won the presidency with 180 electoral votes, but did not receive a majority of the popular vote.
Ellsbury, George H. (1840–1900):
American artist and lithographer. Ellsbury worked for Harper’s Weekly as a sketch artist during the Civil War. He also created city views of the American Midwest between 1866 and 1874, before moving to Minnesota and the western territories.
Freeing a person from the controlling influence of another person, or from legal, social, or political restrictions. In the United States, it is often used to refer specifically to the abolition of slavery.
Executive order issued by President Abraham Lincoln on September 22, 1862, stating that as of January 1, 1863, "all persons held as slaves" within the rebellious southern states (those that had seceded) "are, and henceforward shall be free." The Emancipation Proclamation applied only to the rebelling Confederacy, leaving slavery legal in the Border States and parts of the Confederacy under Union control. Nonetheless, slaves who were able to flee Confederate territory were guaranteed freedom under Union protection. While the order did not end slavery, it added moral force to the Union cause and allowed African American men to join the Union armies.
Printmaking technique in which the artist uses a tool called a burin to cut lines into a wood or metal surface. After the design is cut, the woodblock or metal plate is inked and the image is transferred under pressure to paper.
Visual and documentary materials—pamphlets, ribbons, buttons, printed matter—that are generally not intended to last. Items produced for political campaigns—including Abraham Lincoln’s—are often considered to be ephemera. As historical material, ephemera are very valuable because they help us understand what audiences in the past saw and used.
Relating to horses. Equestrian portraits of Civil War officers show uniformed figures seated on active or athletic-looking horses. This kind of image is often seen in art history; kings and emperors were often shown this way to suggest their power as leaders.
Printmaking technique in which the artist coats a metal plate in wax and then removes wax from parts of the plate to create the design. Acid is then applied to the plate; it bites into the exposed metal, making the design permanent. The plate is inked and the design is transferred under pressure from the plate to paper.
In photography, the amount of time that the shutter of the camera is open, determining how much light enters into the camera and falls on the light-sensitive surface (like a metal or glass plate or film in pre-digital photography). The surface is then processed to create a photograph. During the Civil War, photography was still new and exposure times needed to be longer to get a visible image. This made it difficult to take pictures of action, such as battle, because the subjects had to be still for the entire time the shutter was open.
Fassett, Cornelia Adele (1831–1898):
Portraitist who worked in Chicago and Washington, D.C. Fassett worked with her husband, photographer Samuel M. Fassett, and painted portraits of prominent Illinois men, including Abraham Lincoln in 1860. She moved to Washington, D.C. in 1875, where she received many political commissions, including portraits of Ulysses S. Grant, Rutherford B. Hayes, and James Garfield. Fassett is known for these portraits as well as for her painting The Florida Case before the Electoral Commission of 1879, which is in the United States Senate art collection and features roughly 260 Washington politicians.
Fassett, Samuel (active, 1855–1875):
American photographer active before, during, and after the Civil War. Fassett worked in Chicago and Washington, D.C. In Washington, he was a photographer to the Supervising Architect of the Treasury. Fassett is best known for taking one of the earliest photographs of Abraham Lincoln before he became president. He was married to American painter Cornelia Adele Fassett, who painted a portrait of Lincoln after her husband’s image.
Firestone, Shirley (active, 20th century):
Painter who depicted Harriet Tubman in 1964.
Forbes, Edwin (1839–1895):
Illustrator and artist. Forbes produced images for Frank Leslie’s Illustrated Newspaper from 1861–1865 and traveled as a sketch artist with the Army of the Potomac, covering events of the war. He depicted scenes of everyday life as well as battle scenes, such as the Second Battle of Bull Run and Hooker’s Charge on Antietam. Forbes went on to produce many etchings and paintings from his Brooklyn studio, inspired by his war-time images.
In artworks that portray scenes or spaces, the foreground is the area, usually at the bottom of the picture, which appears closest to the viewer. The background is the area that appears farthest away and is higher up on the picture plane.
Infantry soldier who fought on foot during the Civil War. Foot soldiers carried different types of swords and weapons than did cavalry soldiers (who fought on horseback), since the two were trained to fight in different situations.
Fort in the harbor of Charleston, South Carolina that was the site of the first military action in the Civil War. The fort was bombarded by the newly formed Confederacy between April 12 and 13, 1861. On April 14, Major Robert Anderson lowered the American flag and surrendered the fort. This event led to widespread support for war in both the North and the South. Following the battle, Lincoln called for seventy-five thousand men to enlist in the armed services to help suppress the rebellion, which led four more states to join the Confederacy.
Factory that produces cast goods by pouring molten metal (such as iron, aluminum, or bronze) into a mold. A foundry is needed to produce goods like bronze sculptures or artillery, such as cannons.
Frank Leslie’s Illustrated Newspaper:
Popular publication during the Civil War that featured fiction, news, and illustrations of battlefield life. Frank Leslie is the pseudonym (fake name) adopted by English illustrator and newspaper editor Henry Carter. Carter worked for the Illustrated London News and circus man P. T. Barnum before moving to America and founding his first publication using the name Frank Leslie. After the war, Leslie married Miriam Follin, a writer who worked for his paper. Following Leslie’s death, Miriam changed her name to “Frank Leslie” and took over as editor. A paper with the name Frank Leslie on its masthead was in publication from 1852–1922.
Philosophy that stressed economic opportunity and a man’s ability to move across social class and geographic boundaries. Those who believed in free labor thought that man should be free to earn the fruit of his own labor, gain independence, and prosper within a democratic society. Most free labor thinkers opposed slavery to some extent, and the idea itself was central to both the Free Soil movement and the Republican Party.
Type of anti-slavery political philosophy that declared that western territories of the United States should be free of slavery. Unlike abolitionists, many white “free soilers” were unconcerned with Southern slaves. Instead, they feared slavery’s impact on white workers, believing that the system of slavery made it harder for free workers to compete. Some free soilers were also racist and opposed living near African Americans. Others, like Abraham Lincoln, opposed slavery on moral grounds, but believed that Congress could not end slavery where it already existed and could only keep it out of territories where it had not yet been established.
French, Daniel Chester (1850–1931):
Leading American monumental sculptor. French studied for two years in Italy before returning to the United States to open studios in Boston and Washington, D.C. He earned commissions for portraiture and public monuments, where he combined classical symbolism with realism in his sculptures. French is perhaps best known for the massive seated Lincoln at the Lincoln Memorial on the National Mall in Washington, D.C. (1911–1922).
Fugitive Slave Act:
Part of the Compromise of 1850 that enhanced the Constitution’s 1787 fugitive slave clause by creating a system of federal enforcement to manage slaveholder claims on runaway slaves. Before the war, such claims were handled by state officials, and many free states passed personal liberty laws to protect free blacks from being falsely claimed as runaways; these laws, however, also helped abolitionists hide actual fugitive slaves. The new act put federal marshals in charge of runaway slave claims in an attempt to override state laws. Nonetheless, many free states refused to help implement the Act, making it difficult to enforce.
Furan, R. (active, 20th century):
Painter who depicted Harriet Tubman in 1963.
Gardner, Alexander (1821–1882):
Scottish-American scientist and photographer who worked with photographer Mathew Brady. Gardner served as the manager of Brady’s Washington, D.C. gallery until the outbreak of the Civil War. Gardner published more than 3,000 images from the war, taken by himself and by others he hired to help him. One hundred of these appear in the landmark publication Gardner’s Photographic Sketch Book of the War. The collection, however, was a commercial failure. After the war, Gardner traveled to the West and continued photographing.
Garrison, William Lloyd (1805–1879):
Abolitionist and publisher who founded the anti-slavery newspaper The Liberator in 1831. Garrison rejected colonization and believed that African Americans were the equals of white citizens and should be granted political rights in American society. He co-founded the American Anti-Slavery Society and in 1854 publicly burned copies of the U.S. Constitution and the Fugitive Slave Act because they protected slavery. During the Civil War he supported the Union, but criticized President Lincoln for not making abolition the main objective of the war. After the Civil War and the passage of the 13th Amendment banning slavery, Garrison fought for temperance and women’s suffrage.
Refers to the type of subject matter being depicted. Landscapes, still lifes, and portraits are different genres in art. “Genre” can also specifically refer to art that depicts scenes of everyday life.
Gifford, Sanford Robinson (1823–1880):
American landscape painter and native of Hudson, New York. Influenced by Thomas Cole, founder of the Hudson River School of painting, Gifford studied at the National Academy of Design, but taught himself to paint landscapes by studying Cole’s paintings and by sketching mountain scenes. He developed an individual style by making natural light the main subject of his paintings. Gifford traveled widely throughout his career, painting scenes from Europe, the Near East, the American West, the Canadian Pacific region, and Alaska. Gifford also served in the Union army, although his art makes few references to his experience of the war.
Opaque paint similar to watercolor. Gouache is made by grinding pigments in water and then adding a gum or resin to bind it together. The paint has a matte finish.
Graff, J. (active, 19th century):
Painter who depicted the Chicago Zouaves, a famous Civil War drill team, during their visit to Utica, New York.
Grand Army of the Republic (G.A.R.):
An organization for honorably discharged veterans of the Union army founded in Illinois in 1866. Its hundreds of thousands of members helped needy and disabled veterans, lobbied for the passage of pension laws and government benefits for veterans, encouraged friendship between veterans, and promoted public allegiance to the United States Government; it also served as a grass roots organizing arm of the Republican Party. The G.A.R. helped make Decoration Day (Memorial Day) a national holiday and was responsible for making the pledge of allegiance a part of the school day.
Grant, Ulysses S. (1822–1885):
Union military leader during the Civil War. Grant attended West Point and fought in the Mexican-American War prior to his Civil War service. After fighting in the Mississippi Valley and winning victories at Shiloh and Vicksburg, Grant moved to the East to act as General in Chief of the United States Army in March 1864. His relentless campaign ground down Robert E. Lee’s Army of Northern Virginia for the next year, culminating in Lee’s surrender to Grant at Appomattox Court House, Virginia, on April 9, 1865. He was later elected eighteenth President of the United States from 1869 to 1877.
Picture that features more than one person and communicates something about them. Because it was important to include certain people in a group portrait, artists and publishers sometimes added individuals who hadn’t actually posed for the artist, or left out some of those who did.
Great Seal of the United States (also called the Seal of the United States):
National coat of arms for the United States. The design, created on June 20, 1782, portrays a bald eagle holding a shield representing the original thirteen states. The blue band above represents Congress and the stars represent the U.S. on the world stage. The Latin language motto E Pluribus Unum means “out of many, one.” The olive branch symbolizes peace; thirteen arrows symbolize war. On the reverse, a pyramid symbolizes strength and duration. Over it is an eye, symbolizing God. There are two other mottoes: Annuit Coeptis, meaning “He [God] has favored our undertakings,” and Novus Ordo Seclorum, meaning “a new order of the ages.”
Site of radical abolitionist John Brown’s October 17, 1859, raid, where he and twenty-one men (white and black) captured a federal armory and arsenal as well as a rifle works. Brown hoped to inspire a slave uprising in the surrounding area, but instead he and most of his men were captured by U.S. Marines led by Robert E. Lee, future General of the Confederate Army of Northern Virginia. Many of the raiders died, and Brown was put on trial and then hanged for his actions. Brown’s fiery statements during his trial were inspirational to Northern abolitionists and outraged Southerners.
Harper’s Weekly (A Journal of Civilization):
Popular Northern, New York-based, illustrated magazine (1857–1916) and important news source about the Civil War. It consisted of news, popular interest stories, illustrations, and war-related features. Harper’s employed illustrators and artists such as Edwin Forbes and Winslow Homer to make images, sometimes while traveling with the Northern armies.
Healy, George P. A. (1813–1894):
American painter of portraits and historic scenes. Healy studied in France and created works for European royalty before he returned to America. Healy was one of the most well-known and popular portrait painters of his time. Between 1855 and 1867, Healy lived in Chicago and painted important political figures like Abraham Lincoln as well as famous authors and musicians. After the Civil War, Healy traveled throughout Europe painting commissions before returning to Chicago in 1892.
Herline, Edward (1825–1902):
German-American lithographer and engraver. Herline was active in Philadelphia starting in the 1850s, working with several print publishers, including Loux & Co. He was known for his artistic skill in creating microscopic details in his views. Herline produced a wide range of lithographs including city views, book illustrations, maps, and images for advertisements.
Hill, A. (active, 19th century):
Lithographer who created images for Ballou’s Magazine, a nineteenth-century periodical published in Boston, Massachusetts.
Hollyer, Samuel (1826–1919):
British-American printmaker who worked in lithography, etching, and engraving. Hollyer studied in London before immigrating to America in 1851. Hollyer worked for book publishers in New York City and was known for portraits, landscapes, and other illustrations before, during, and after the Civil War.
Term used to describe the area of a nation or region at war that is removed from battlegrounds and occupied by civilians. During the Civil War, there were Northern and Southern homefronts.
Homer, Winslow (1836–1910):
American painter and artist of the Civil War period. Homer used his art to document contemporary American outdoor life and to explore humankind’s spiritual and physical relationship to nature. He had been trained in commercial illustration in Boston before the war. During the conflict he was attached to the Union’s Army of the Potomac and made drawings of what he saw. Many of these were published in the popular magazine Harper’s Weekly. After the war, Homer became more interested in painting, using both watercolors and oils. He painted children, farm life, sports, and the sea.
Horton, Berry (1917–1987):
African American artist who worked in Chicago. Horton made figure drawings and paintings.
Hudson River School:
Group of American landscape painters in the nineteenth century (about 1825 to the 1870s) who worked to capture the beauty and wonder of the American wilderness and nature as it was disappearing. Many of the painters worked in or around New York’s Hudson River Valley, frequently in the Catskill and Adirondack Mountains, though later generations painted locations outside of America as well. This group is seen as the first uniquely American art movement since their outlook and approach to making art differed from the dominant European artistic traditions.
State of being or conception that is grander or more perfect than in real life. In art, this may mean making a sitter look more beautiful or a leader more powerful. Much art and literature, especially before 1900, tended to idealize its subjects.
Combination of newspaper and illustrated magazine (such as Harper’s Weekly, Leslie’s Illustrated News, etc.) that appeared in the United States in the 1850s. In an era before television and the internet, these offered a very visual experience of current events. The technology did not exist to publish photographs in such publications at the time. Instead, a drawing was made from a photograph, and then a print was made from the drawing. This was how images based on photographs appeared. Publications also hired sketch artists to go out into the field; their drawings were also turned into illustrations.
Immke, H. W. (1839–1928):
Illinois-based photographer. Immke emigrated from Germany to Peru, Illinois, in 1855 where he studied farming before moving to Chicago in 1866. There, he worked with Samuel M. Fassett, who had one of the best equipped photography studios of the Civil War era. Immke established his own studio in Princeton, Illinois, later that year and operated a very successful business through 1923. He specialized in portraits, with over four hundred images of early Bureau County Illinois settlers in his collection; he also produced landscapes and genre scenes (portrayals of daily life).
Movement towards an economy dominated by manufacturing rather than agriculture. An industrial economy relies on a factory system, large-scale machine-based production of goods, and greater specialization of labor. Industrialization changed the American landscape, leading to artistic and cultural responses like the Hudson River School of painting and the development of parks in urban areas—an interest in nature that was seen as disappearing. By the mid-nineteenth century, the northern United States had undergone much more industrialization than had the South, a factor that contributed to the Union victory over the Confederacy during the Civil War.
Military unit of soldiers who are armed and trained to fight on foot.
Jewett, William S. (1821–1873):
American painter who focused on portraits, landscapes, and genre paintings (scenes of everyday life). He studied at New York City’s prestigious National Academy of Design before being drawn to California by the promise of wealth during the Gold Rush. Although his mining career failed, Jewett discovered that his artistic talents were in high demand among California’s newly rich, who prized his status as an established New York painter. Jewett became one of California’s leading artists.
Kansas-Nebraska Act of 1854:
Law that declared that popular sovereignty, rather than the Missouri Compromise line of 36° 30´ latitude, would determine whether Kansas and Nebraska would be free or slave states. (Popular sovereignty meant that residents of each territory should decide whether slavery would be permitted, rather than the federal government.) After the bill passed, pro-slavery settlers in Kansas fought anti-slavery settlers in a series of violent clashes where approximately fifty people died. This era in Kansas history is sometimes referred to as “Bleeding Kansas” or the “Border War.” Kansas was admitted to the Union as a free state in 1861.
Keck, Charles (1875–1951):
American sculptor known for his realistic style. Born in New York City and a student at the National Academy of Design, Keck apprenticed under celebrated sculptor Augustus Saint-Gaudens before becoming his assistant. Keck’s gift for realistic depiction is seen in his 1945 bronze sculpture The Young Lincoln.
Traditional wool cap worn by Civil War foot soldiers. It had a short visor and a low, flat crown. Both the Union and Confederate armies wore kepis, but Union soldiers wore blue and Confederates wore grey.
Kurz, Louis (1835–1921) and Kurz & Allison (1878–1921):
Austrian-born lithographer and mural painter who primarily worked in Chicago after immigrating to America in 1848. Kurz was known for his book Chicago Illustrated, a series of lithographs featuring views of the city and its buildings. After 1878 Kurz became a partner in an art publishing firm with Alexander Allison. Their company, Kurz & Allison, created chromolithographs (color-printed lithographs) on a variety of subjects, including Abraham Lincoln and the Civil War. The firm continued until Kurz’s death in 1921.
An outdoor space, or view of an outdoor space. Landscapes in art are often more than just neutral portrayals of the land. They can reflect ideas, attitudes, and beliefs, and may even refer to well known stories from the past. Landscapes are also the settings for myths, biblical stories, and historical events. At the time of the Civil War, landscape paintings were often used to communicate ideas about American expansion, patriotism, and other ideas relevant to the time.
Law, William Thomas (active, 19th century):
Painter who depicted the 1860 Republican National Convention in Chicago.
Lawrence, Martin M. (1808–1859):
American photographer who had a studio in New York. Lawrence trained as a jeweler, but began to make daguerreotypes (an early type of photograph) in the early 1840s. He was well-regarded amongst his peers for his commitment to experimenting with new techniques in early photography. He was profiled in the new publication The Photographic Art Journal in 1851 as a leader in his field.
Lee, Robert E. (1807–1870):
Confederate military leader during the Civil War. Lee graduated second in his class from West Point in 1829 and served in the U.S. Army until the secession of his home state of Virginia in 1861. Lee then resigned from the U.S. Army to join the Confederate cause. In May 1862, Lee took command of the Confederacy’s Army of Northern Virginia. He won victories at Manassas and Chancellorsville, and eventually became General in Chief of all Confederate armies on February 6, 1865. Lee surrendered to Union General Ulysses S. Grant on April 9, 1865, effectively ending the Civil War.
Cast or model of a person’s face and/or hands made directly from that person’s body. A life mask is made from a living subject and a “death mask” from the face of a deceased person. Typically grease is applied to the face or hands, which are then covered with plaster that hardens to form a mold. Abraham Lincoln was the subject of two life masks. Sculptors often made or used these to aid them in creating portraits. Sometimes the masks were used to make metal or plaster casts.
Lincoln, Abraham (1809–1865):
Sixteenth President of the United States. Lincoln was an Illinois lawyer and politician before serving as a U.S. Representative from 1847 to 1849. He lost the 1858 election for U.S. Senate to Democrat Stephen Douglas, but their debates gave Lincoln a national reputation. In 1860, Lincoln won the Presidency, a victory that Southern radicals used as justification for secession. Lincoln’s Emancipation Proclamation went into effect on January 1, 1863, and led to the eventual abolition of slavery. Re-elected in 1864, Lincoln was assassinated by John Wilkes Booth shortly after the war’s end.
Type of print made using a process of “drawing upon stone,” where a lithographer creates an image on a polished stone with a greasy crayon or pencil. The image is prepared by a chemical process so that the grease contained in it becomes permanently fixed to the stone. The stone is sponged with water, and printer’s ink, containing oils, is rolled over the surface. Because oil and water repel each other, the ink remains in areas with grease. The image is then transferred to paper using a special press. Chromolithography, a multicolored printing process, uses a different stone for each color of ink.
Loux & Co.:
Philadelphia lithography firm, active in the nineteenth century, that specialized in maps and views of cities. Loux & Co. worked with artists like Edward Herline.
Lussier, Louis O. (1832–1884):
Canadian-American portrait painter. Lussier studied in San Francisco and worked in California with partner Andrew P. Hill before relocating to Illinois after the Civil War.
March to the Sea:
Military campaign (also known as the Savannah Campaign) led by Union General William Tecumseh Sherman between November 15 and December 21, 1864. Sherman marched with 62,000 Union soldiers between Atlanta and Savannah, Georgia, confiscating or destroying much of the Southern civilian property in their path. This march is an early example of modern “total war,” as it strove to destroy both the Confederacy’s civilian morale and its ability to re-supply itself.
Martyl (Suzanne Schweig Langsdorf) (1918–2013):
American painter, print maker, muralist, and lithographer who trained in art history and archaeology. Langsdorf studied at Washington University in St. Louis. She was given her art signature name, “Martyl,” by her mother, who was also an artist. Martyl painted landscapes and still lifes in both the abstract and realist tradition. She taught art at the University of Chicago from 1965 to 1970.
Person who suffers, makes great sacrifices, or is killed while standing for his or her beliefs.
Mayer, Constant (1832–1911):
French-born genre (everyday scenes) and portrait painter. Mayer studied at the prestigious École des Beaux-Arts in Paris before immigrating to America. Mayer’s works were popular in the States and abroad. Generals Ulysses S. Grant and Philip Sheridan are among the noteworthy individuals who had their portraits painted by Mayer.
The material or materials an artwork is made of, such as oil paint on canvas or bronze for sculpture. During the Civil War more and more media were becoming available and affordable, including photography and various kinds of prints.
Merritt, Susan Torrey (1826–1879):
Amateur artist from Weymouth, Massachusetts, who is noted for her collage painting Antislavery Picnic at Weymouth Landing, Massachusetts.
Military Order of the Loyal Legion of the United States (M.O.L.L.U.S.):
Patriotic organization founded by Philadelphia Union military officers immediately after the assassination of President Abraham Lincoln. M.O.L.L.U.S. was established to defend the Union after the war, as there were rumors following Lincoln’s death of a conspiracy to destroy the federal government through assassination of its leaders. Officers in M.O.L.L.U.S. served as an honor guard at Lincoln’s funeral.
Miller, Samuel J. (1822–1888):
Photographer who created daguerreotypes (an early form of photography) in Akron, Ohio. Miller’s sitters included anti-slavery activist Frederick Douglass.
First major legislative compromise about slavery in the nineteenth century. In 1819, Missouri sought to join the Union as a slave state. Northerners opposed to slavery’s expansion westward tried to force Missouri to adopt an emancipation plan as a condition for admission; Southerners angrily opposed this. A compromise bill was forged in 1820, when Maine was admitted as a free state alongside slaveholding Missouri. In addition, slavery was prohibited from territory located north of the 36° 30’ latitude (except Missouri). The precedent of admitting slave and free states in tandem held until the Compromise of 1850.
In sculpture, the method of adding or shaping material (clay, wax, plaster) to form an artwork. In painting and drawing, modeling is the method of making things look three dimensional by shading their edges, for example.
Moran, Thomas (1837–1926):
Born in England but raised in Philadelphia, Moran was the last of the nineteenth-century American landscape painters known as the Hudson River school. After a brief apprenticeship as an engraver, he studied painting, traveling to England in 1862 and Europe in 1866. In 1872 the United States Congress purchased his painting Grand Canyon of the Yellowstone, a work that resulted from his participation in the first government-sponsored expedition to Yellowstone. Moran’s illustrations helped convince the government to preserve the region as a national park. Over Moran’s long and commercially successful career he painted the American West, Italy, Cuba, Mexico, and New York.
Mount, William Sidney (1807–1868):
American portraitist and America’s first major genre (everyday scene) painter. Mount studied briefly at the National Academy of Design but was mainly self-taught. By drawing his subject matter from daily life, Mount rejected the high-culture demand for grand historical scenes modeled after European examples. Mount’s images were reproduced as engravings and color lithographs based on his paintings—a common practice before the age of photography. These prints popularized his art and encouraged other artists to pursue genre subjects. Hailed by critics of the era as an original American artist, Mount created works that reflect daily life and the politics of his time.
Mulligan, Charles J. (active, 19th and early 20th centuries):
Talented American sculptor who trained under renowned sculptor Lorado Taft. Mulligan studied at the School of the Art Institute of Chicago and later at the prestigious École des Beaux-Arts in Paris. Mulligan also taught at the School of the Art Institute of Chicago before leaving to focus on commissioned work, such as his acclaimed 1903 portrayal of the martyred Lincoln, Lincoln the Orator.
Painting (typically large scale) created directly on a wall or on canvas mounted to a wall.
Myers, Private Albert E. (active, 19th century):
Amateur painter and Union soldier from Pennsylvania. Myers painted an image of Camp Douglas in Chicago (a prisoner-of-war camp for captured Confederate soldiers, and a training and detention camp for Union soldiers) while he was stationed there during the Civil War.
Toy version of nineteenth-century stage spectacles. They were meant to imitate shows that featured large-scale pictures of famous events or dramatic landscapes. Children looked into the box of the myriopticon and moved knobs to change from one picture to another. The toy often came with posters, tickets, and a booklet from which to read a story to accompany the pictures.
Nall, Gus (active, 20th century):
African American representational and abstract painter. Nall studied at the Art Institute of Chicago, and later taught art. He was active in Chicago in the 1950s and 1960s.
Nast, Thomas (1840–1902):
Popular political cartoonist. Born in Germany, Nast immigrated to America in 1846. He began his career as a reportorial artist and freelance illustrator in the years leading up to the Civil War. An ardent supporter of the Union cause, Nast created many recruitment posters and newspaper promotions for the war effort. He joined Harper’s Weekly in 1862 and quickly gained fame as a political cartoonist and satirist, working to expose corruption in government in the post-Civil War years. Nast died in Ecuador after contracting yellow fever while serving there as Consul General, a post to which he had been appointed by President Theodore Roosevelt.
Artistic approach in which artists attempt to make their subjects look as they do in the real world. Such artworks are said to be "naturalistic."
New York State Emancipation Act of 1827:
Legislation formally banning slavery in New York State. After the Revolutionary War, New York gradually enacted laws that restricted the growth of slavery. Importing new slaves became illegal in 1810, for example. The 1827 act grew out of legislation passed in 1817 that set July 4, 1827, as the date when the following additional measures for enslaved African Americans would go into effect: those born in New York before July 4, 1799 would be freed immediately; all males born after that date would be freed at the age of 28; and all females would be freed at the age of 25.
Painting made from pigment (color), such as ground minerals, suspended in oil. Oil paintings can have a glowing quality and are admired for their jewel-like colors. They typically require a long time to dry.
Military weapons including anything that is shot out of a gun, such as bullets or cannonballs.
O’Sullivan, Timothy (c.1840–1882):
Photographer who worked with Mathew Brady and Alexander Gardner. O’Sullivan began his career in photography as an apprentice to Mathew Brady. He left Brady’s studio to work independently as a Civil War photographer for two years before joining the studio of Alexander Gardner, helping to provide images for Gardner’s Photographic Sketch Book of the War. After the war, O’Sullivan accompanied and made photographs for many government geographical surveys of the United States before being appointed chief photographer for the United States Treasury in 1880.
P. S. Duval & Son (1837–1879):
Philadelphia lithography firm founded by French-American lithographer Peter S. Duval. Duval was brought to America from France by Cephas G. Childs to work in his Philadelphia firm. Duval was one of America’s most prestigious makers of chromolithographs (lithographs printed in multiple colors). After a fire in 1856, Duval’s son Stephen joined the firm. The firm was famous for being an innovative lithographic leader that printed well-made, colorful city views, historic scenes, and portraits on a variety of subjects.
Created by the repetition of elements (shapes or lines, for example) in a predictable combination.
Philippoteaux, Paul D. (1846–1923):
French painter and artist known for creating cycloramas (massive oil on canvas paintings that were displayed with real props for a three-dimensional effect). Philippoteaux was commissioned to paint a “Battle of Gettysburg” cyclorama in 1882. He created several paintings in the post-Civil War era depicting the war’s battles and military leaders.
An image created by a photographer using a camera. Photography is a scientific and artistic process that uses light to create a permanent image. During the Civil War era, a photographer used a lens to focus light on a light-sensitive surface (like a specially prepared metal or glass plate or film) for a specific length of time. In pre-digital photography, the surface was then processed (or "developed") with chemicals to reveal an image. Types of photographs included albumen prints, ambrotypes, daguerreotypes, and tintypes.
Pleasing to look at or resembling art; literally means “like a picture.” In the nineteenth century, the term was also understood to mean an established set of aesthetic ideals that were developed in England and often used in American landscape painting, like those produced by the Hudson River School.
Substance that gives color to paint, ink, or other art media. Oil paints, for example, are made from powdered pigment suspended in oil. Pigments may be made from natural substances, such as minerals and plants, or may be synthetic.
The United States Constitution provides that each state’s citizens be represented in Congress by people they elect. Each state receives two Senators, but in the House the number of representatives varies according to a state’s population, as determined by census every ten years. During the Constitutional Convention of 1787, Southern slaveholding states refused to join the Union unless they could include their slave populations in this calculation. Without this measure, they would have been overwhelmingly outnumbered by free state representatives. After debate, the convention compromised by allowing states to count three-fifths of their slave populations toward representation in the House.
Artwork or building that has many colors.
Temporary floating bridge made by placing small boats called pontoons next to each other. The pontoons are tied together but not to the land, so the bridge can move with the current of the river or stream. During the Civil War, moving the bridge parts over land was done by wagon, and required many men and horses. The Union army became exceptionally skilled at building pontoon bridges, even across the swamps of the Deep South.
Political principle coined by Senator Lewis Cass of Michigan during his 1848 Presidential campaign, and later championed by Senator Stephen Douglas of Illinois. The principle stated that settlers of each territory, not the federal government, should determine whether or not slavery would be permitted there. Popular sovereignty was a compromise to resolve Congressional conflict over whether or not United States territories should be admitted to the Union as free or slave states. Though the Democratic Party endorsed the idea, it was rejected by many northerners in favor of Free Soil ideas, and the pro-slavery South grew increasingly hostile toward it.
Total number of votes directly cast by eligible voters for a candidate in an election. In the United States presidential election system, the popular vote in each state determines which candidate receives that state’s votes in the Electoral College. The Electoral College is a voting body created by the U.S. Constitution that elects the President and Vice President using appointed electors. The number of electors for each state is equal to the state’s number of federal representatives and senators. These electors are obligated to cast their votes for the ticket that won the popular vote in their respective states.
Representation or depiction of a person in two or three dimensions (e.g. a painting or a sculpture). Sometimes an artist will make a portrait of himself or herself (called a self-portrait).
Powers, Hiram (1805–1873):
One of the most influential American sculptors of the nineteenth century. Powers developed a passion for sculpture as a young man while studying in Cincinnati under Prussian artist Frederick Eckstein. Powers began his career doing portrait busts of friends and later politicians. He is best known for The Greek Slave (1843), which was championed as a symbol of morality, especially during its tour of the United States amid rising abolitionist tensions. He spent much of his life within the artistic expatriate community in Florence, Italy, and received many commissions throughout his later career, notably some for the Capitol in Washington, D.C.
Price, Ramon B. (1930–2000):
African American artist and curator. Price was born in Chicago and educated at the School of the Art Institute of Chicago and Indiana University at Bloomington. Mentored by Margaret Burroughs, co-founder of the DuSable Museum of African American History, Price became a painter and a sculptor who focused his career on teaching. Price educated high school and college students before becoming chief curator at the DuSable Museum.
A mechanically reproduced image, usually on paper, but sometimes on another type of surface, like fabric. Printmaking encompasses a range of processes, but prints are generally produced by inking a piece of wood, metal, or polished stone that has a design or drawing on it. Pressure is applied when the inked surface comes into contact with the material being printed on; this transfers the design to the final printed surface.
Proctor, Alexander Phimister (1860–1950):
Painter, etcher, and sculptor known for his unsentimental representations of the American West and his sculptures of historical and symbolic subjects. Proctor began his career as a wood engraver, and later gained international recognition for his 35 sculptures of western animals, commissioned for the World’s Columbian Exposition in 1893. Throughout his career, his subjects ranged from animals inspired by his frequent hunting trips to political icons, such as General Robert E. Lee and William T. Sherman; he also sculpted figures that represent American ideals, such as the Pioneer Mother.
One who opposes or takes arms against his or her government. During the Civil War, Northerners applied this term to supporters of the Confederacy, particularly to soldiers and armies. Southerners also adopted the name as a badge of honor, associating it with the colonial rebels of the American Revolution.
Act of public resistance—often violent—to a government or ruler. In the Civil War, the North saw the secession of the South as an act of rebellion, while Southerners saw the formation of the Confederacy as within their States’ rights.
Rebisso, Louis T. (1837–1899):
Italian-born sculptor who created monumental works in the United States. Rebisso was forced to leave Italy for political reasons while in his twenties. He immigrated to Boston and later settled in Cincinnati, the city with which he is linked. He worked as professor of sculpture at the Art Academy of Cincinnati. The artist is well known for his bronze Ulysses S. Grant Memorial (1891) in Chicago’s Lincoln Park.
Period after the Civil War during which the Confederacy was reintegrated into the Union between 1865 and 1877. The era was turbulent, as former slaves fought for citizenship rights while white Southerners violently resisted change. By 1877, whites again controlled their states, after which they systematically oppressed black citizens politically and economically.
Renesch, E. G. (active, 20th century):
Creator of patriotic images and recruiting posters around the time of WWI, some of which included Abraham Lincoln and others that showed African-Americans in uniform.
An image or artistic likeness of a person, place, thing, or concept.
Political party formed in 1854 by antislavery former members of the Whig, Free Soil, and Democratic Parties. Republicans ran their first candidate for president in 1856. At that time, they pledged to stop the spread of slavery, maintain the Missouri Compromise, admit Kansas to the Union as a free state, and oppose the Supreme Court’s decision in the Dred Scott case. The party was mainly composed of Northerners and it sought the support of Westerners, farmers, and Eastern manufacturers. Abraham Lincoln ran for president as a Republican and won the election in 1860.
Rogers, John (1829–1904):
Renowned artist who sculpted scenes of everyday life, families, and Civil War soldiers. Rogers primarily made statuettes, referred to as Rogers Groups, which were mass produced as plaster casts and sold to and displayed in households across the country. He also received commissions for several larger-scale pieces, such as a sculpture of General John F. Reynolds in Philadelphia.
Approach or movement in art that stresses strong emotion and imagination. Romanticism was dominant in the arts between about 1780 and 1840, but is also present in art made since then.
Saint-Gaudens, Augustus (1848–1907):
Foremost American sculptor of his era. Saint-Gaudens began his career as apprentice to a stone-cutter at age thirteen. He studied at the college Cooper Union and the National Academy of Design, both in New York. He collaborated with other American painters and architects on several projects, while also creating important independent sculptures and reliefs. Some of his most famous works include his public monuments to President Lincoln and Colonel Robert Gould Shaw. Saint-Gaudens also designed decorative arts, coins and medals, busts, and relief portraits.
Events held in Northern cities during the Civil War to raise money to support Union soldiers. The fairs were organized through the United States Sanitary Commission, formed in response to the Army Medical Bureau’s inability to maintain clean, medically safe environments for soldiers, particularly the wounded. Women played an important role in founding the commission and organizing the fairs. The first event, the Northwestern Soldiers’ Fair, was held in Chicago in October and November 1863. Donated items were exhibited and purchased to benefit the Union military. The atmosphere of these fairs was festive, with lots of displays, vendors, music, and speeches.
Saunders, Harlan K. (1850–c. 1950):
Artist who served in the Civil War, fighting with the 36th Illinois Volunteer Infantry. Saunders painted General John A. Logan after the war.
Art consisting of images carved onto ivory or ivory-like materials. Initially the term referred to art made by American whalers who carved or scratched designs onto the bones or teeth of whales or the tusks of walruses. Much of this art was made during the whaling period (between the 1820s and the 1870s). Seamen often produced their designs using sharp implements and ink or lampblack (produced from soot from oil lamps, for example) wiped into the scratched lines to make the intricate drawings visible.
Three-dimensional work of art. Sculptures can be free-standing or made in relief (raised forms on a background surface). Sometimes, a sculpture is described according to the material from which it is made (e.g., a bronze, a marble, etc.).
To break away from a larger group or union. Secession has been a common feature of the modern political and cultural world (after 1800) when groups of all kinds sought identity and independence. In the context of the Civil War, the Confederacy argued that a state could secede if it believed the federal government failed to meet its Constitutional duties. Because the states had voluntarily entered the federal government, they could likewise exit the Union should they see fit to do so. In 1860–1861, slaveholding states believed that Congress’ failure to protect slavery in the territories justified secession.
Sense of identity specific to a region of the country or group of states. Leading up to the Civil War, sectionalism was caused by the growing awareness that different regions of the country (North and South) had developed distinct economic interests and cultures as a consequence of their forms of labor. Those differences prompted political conflicts over the place of slavery in the country. The most radical brand of sectionalism in the United States led to secession.
Shaw, Robert Gould (1837–1863):
Colonel in the Union Army who led the African American 54th Massachusetts Volunteer Infantry during the Civil War. Shaw was a member of a prominent Boston abolitionist family, and he attended Harvard in the years before the Civil War. Shaw was killed on July 18, 1863 while leading his troops in the Second Battle of Fort Wagner near Charleston, South Carolina, and was buried at the battle site in a mass grave with his soldiers.
Sheridan, Philip (1831–1888):
Union military leader during the Civil War. Sheridan rose quickly through the ranks of the Union Army during the war, becoming a Major General in 1863. In 1864, he became famous for the destruction of the Shenandoah Valley of Virginia, an area rich in resources and foodstuffs needed by the Confederacy. After the war, Sheridan was military governor of Texas and Louisiana before leading military forces against Indian tribes in the Great Plains. Sheridan served as Commanding General of the United States Army from 1883 until his death in 1888.
Sherman, William Tecumseh (1820–1891):
Union military leader during the Civil War famous for his “March to the Sea,” a total war campaign through Georgia and South Carolina that severely damaged the Confederacy. Sherman graduated from West Point in 1840 and served in the military until 1853. After careers in banking and military education, he re-joined the U.S. Army as a colonel in 1861. He was promoted to Major General after several successful battles. He accepted the Confederate surrender of all troops in Georgia, Florida, and the Carolinas on April 26, 1865. From 1869 to 1883, Sherman served as Commanding General of the U.S. Army.
Person in a painting, photograph, sculpture, or other work of art who is likely to have posed for the artist. “Sitting for a portrait” means to pose for one.
Drawing or painting that is quickly made and captures the major details of a subject. A sketch is not intended to be a finished work.
Slave Power Conspiracy:
Idea that slaveholders held too much power in the federal government and used that power to limit the freedoms of fellow citizens. In particular, proponents of the idea pointed to the ways that abolitionists were prevented from petitioning against slavery by slavery’s sympathizers in Congress, or that slaveholders had dominated the presidency by virtue of the three-fifths compromise (of the first fifteen presidents, ten had owned slaves) or unfairly influenced the Supreme Court, as in the Dred Scott Decision of 1857. The idea became central to the Republican Party’s platform, and to Abraham Lincoln’s campaign in 1860.
System in which people are considered property, held against their will, and forced to work. By the Civil War, slavery was fundamental to the economy, culture, and society of the South, and the slave population numbered four million. Under this system, children born to enslaved mothers were also enslaved. Slavery was thought suitable only for people of African descent, both because, historically, the slave trade had been based on kidnapping African peoples, and because most white Americans believed themselves superior to darker skinned peoples. Slaves built the South’s wealth through their uncompensated forced labor, growing cotton and other crops.
Southern Rights Democrats:
Faction of the Democratic Party made up of Southerners who left the national party just before the Election of 1860. This group openly discussed seceding from the Union and ran on a platform that rejected popular sovereignty, demanded legal protection for slavery in the Western territories, and advocated that the United States reopen the slave trade with Africa (which had ended in 1808). In 1860 John Breckinridge ran for president as a Southern Rights Democrat, receiving seventy-two electoral votes, all from the Deep South states, and coming in second to Republican winner Abraham Lincoln, who received 180 electoral votes.
Spencer, Lilly Martin (1822-1902):
Born in England but raised in Ohio, Spencer focused on genre paintings of American middle-class home life. Spencer showed talent at a young age and trained with American artists around Cincinnati before moving to New York. She was an honorary member of the National Academy of Design, the highest recognition the institution then permitted women. Spencer was active in the art world while also marrying and raising children. Spencer gained fame in Europe and America through her humorous images of domestic life, many of which were reproduced as prints. Spencer continued to paint until her death at the age of eighty.
Type of agricultural product that is in constant demand and is the main raw material produced in a region. Examples of staple crops in the South include cotton, sugar, tobacco, and rice. In the pre-Civil War United States, cotton was the largest export staple crop.
Two nearly identical photographs mounted on a card. When examined through a special viewer (a stereoscope), they give the impression of three-dimensional depth. The principles of stereographic photography were known since the beginning of photography. Stereographic images were made with cameras that had two separate lenses positioned an “eye’s distance” apart. The effect works because, like human eyes, the stereoscope merges two images recorded from slightly different positions into one.
Oversimplified conception, opinion, or belief about a person or group. Stereotypes live on because they are repeated, but they are often cruel and inaccurate. The term also is used for the act of stereotyping a person or group.
Artwork showing objects that are inanimate (don’t move) and arranged in a composition. Still-life paintings often feature common everyday items like food, flowers, or tableware. Sometimes the selection of items is symbolic, representing a person or an idea.
Stowe, Harriet Beecher (1811–1896):
Abolitionist and author of the anti-slavery novel Uncle Tom’s Cabin, published between 1851 and 1852. Stowe was the daughter of Lyman Beecher, preacher and founder of the American Temperance Society. Uncle Tom’s Cabin became a bestseller and enabled Stowe to pursue a full-time career as a writer of novels, short stories, articles, and poems. Stowe used the fame she gained from Uncle Tom’s Cabin to travel through the United States and Europe speaking against slavery.
Stringfellow, Allen (1923–2004):
African American painter and Chicago gallery owner. Stringfellow studied at the University of Illinois and the Art Institute in Milwaukee, Wisconsin. Along with traditional painting, he worked as a printmaker, and in collage and watercolor. Stringfellow was mentored by the African American painter William Sylvester Carter. Many of Stringfellow’s artworks involve images of religion and jazz.
Individual or characteristic manner of presentation or representation. In art, an artist, a culture, or a time period may be associated with a recognizable style.
Something that stands for or represents an idea, quality, or group. The figure of “Uncle Sam” represents the United States, for example. Artists often use symbolism to represent ideas and events in ways that are easy to visualize.
Taft, Lorado (1860–1936):
Sculptor, educator, and writer regarded as one of Chicago’s most renowned native artists. Taft studied at the prestigious École des Beaux-Arts in Paris and returned to Chicago, where he opened a sculpture studio and taught and lectured about sculpture at the School of the Art Institute of Chicago. He also lectured on art history at the University of Chicago, near his studio. Taft earned praise for his work commissioned for the Horticultural Building at the World’s Columbian Exposition in 1893, and soon began making monumental pieces that can be seen across the country.
Tholey, Augustus (birth date unknown–1898):
German-American painter, pastel artist, lithographer, and engraver. Tholey moved to Philadelphia in 1848 where, over the next few decades, he worked for a number of publishing firms. He specialized in military and patriotic portraits.
Type of photograph popular during the Civil War era, sometimes called a “ferrotype.” To make one, a photographic negative is printed on a blackened piece of very thin iron (not tin, incidentally). Seen against the black background, the negative appears as a positive image, as with an ambrotype, another type of photograph. Tintypes were very popular because they were inexpensive and could be put into photo-albums and sent through the mail, unlike fragile and bulkier daguerreotypes. Many Civil War soldiers had tintypes made of themselves.
Way an artist interprets his or her subject. Also refers to his or her uses of art materials in representing a subject.
Truth, Sojourner (1797–1883):
Former slave and advocate for equality and justice. Born into slavery in New York State as Isabella Baumfree, she walked away from slavery in 1825 after her owner broke his promise to grant her freedom. She took the name Sojourner Truth in 1843, and committed her life to preaching against injustice. Truth worked with abolitionist leader William Lloyd Garrison, who published her biography in 1850. Following its publication, Truth became a popular anti-slavery and women’s rights speaker. After the war, Truth campaigned for the Freedman’s Relief Association and advocated for giving land in the Western territories to freed slaves.
Tubman, Harriet (c.1820–1913):
Former slave, abolitionist, and leader in the women’s suffrage movement. Born enslaved in Maryland, Harriet Tubman escaped slavery by age thirty and traveled to freedom in Philadelphia. She risked her life along the Underground Railroad to make several trips back to the South to lead family members and others out of bondage. Tubman became a supporter of John Brown, and spoke out publicly against slavery. During the Civil War, she aided the Union army as a scout and spy in Confederate territory. After the war, Tubman became a leader in the women’s suffrage movement.
Uncle Tom’s Cabin; or, Life Among the Lowly:
Popular anti-slavery novel published in 1852 by the New England abolitionist and writer Harriet Beecher Stowe (1811–1896). It first appeared as installments in an abolitionist magazine before it was published in two parts. Among the most widely read books of the nineteenth century, it was translated into several languages and often performed as a play. Several of its characters and famous scenes were portrayed in art and illustrations during the Civil War period. The illustrator Hammatt Billings (1818–1874) made the well-known engravings that illustrated the book.
Symbolic name for the secret network of people, routes, and hiding places that enabled African American slaves to escape to freedom before and during the Civil War. Although some white Northern abolitionists supported the network, escaping slaves were frequently assisted by fellow African Americans, both Southern slaves and Northern freedmen. Code words were often used to talk about the Underground Railroad: “conductors” such as Harriet Tubman led escaping slaves, or “cargo,” to safe places called “stations.”
Shorthand for the United States federal government. During the Civil War, it became the name most frequently used to describe the states left behind after the Confederacy seceded (though they are also called “the North”). It was made up of eighteen free states, five Border States (those slave states that did not secede), and the western territories.
United States Colored Infantry/Troops (U.S.C.T.):
Branch of the Union Army reserved for black servicemen, as the army did not allow integrated regiments. The majority of the U.S.C.T.’s approximately one hundred seventy-nine thousand soldiers came from slave states, but African American men from all over the United States eagerly joined the Federal Army because they believed Union victory would end slavery. In the free states, for instance, nearly seventy percent of eligible African American men enlisted! As the war progressed, the War Department looked to the South to bolster the ranks, since one of the military necessities driving emancipation was to increase the fighting strength of the federal army.
United States Sanitary Commission (U.S.S.C.):
Civilian organization founded to help improve medical care and sanitary conditions for Union soldiers. The U.S.S.C. raised money and collected goods to provide supplies and medical care to soldiers. It worked with the military to modernize and provide hospital care for the wounded. Members also raised money through public events like Sanitary Fairs, where donated items were exhibited and purchased to benefit the Union military.
Geographic and cultural area of the American South. During the Civil War, it included states that seceded from the Union and joined the Confederacy (Virginia, North Carolina, Tennessee, and Arkansas) and Border States which remained loyal to the Union (Delaware, Maryland, Kentucky, and Missouri). Sometimes referred to as the “Upland South,” the region is distinct from the Lower or Deep South in its geography, agriculture, and culture.
Growth of cities and a movement of populations to cities. Urbanization causes economic and cultural changes that affect people in both urban and rural areas. In the time leading up to and during the Civil War, the North underwent urbanization at a fast rate. This gave the North advantages in the war in terms of both manufacturing and the ability to move people and goods from place to place.
Volk, Leonard Wells (1828–1895):
American sculptor who had a studio in Chicago. Many regard him as the first professional sculptor in this city. Volk was related by marriage to Illinois Senator Stephen A. Douglas, who sponsored Volk’s art education in Europe in the mid-1850s. In 1860 Volk became the first sculptor to make life casts in plaster of President Lincoln’s hands, face, shoulders, and chest. Volk became known for his war monuments, but his casts of Lincoln were frequently used by other artists to create sculptures of the president.
War with Mexico:
War fought between the United States and Mexico (1846–1848). After the U.S. annexed Texas in 1845, President James K. Polk attempted to purchase large swaths of western territory from Mexico. When Mexico refused, the U.S. created a border dispute that it later used as an excuse to declare war. With U.S. victory came five hundred thousand square miles of new territory, including what would become California, New Mexico, Arizona, and parts of Utah, Colorado, Nevada, and Wyoming. Disagreements over slavery’s place in these territories provoked political tensions that led to the Civil War.
Ward, John Quincy Adams (1830–1910):
American sculptor in bronze, marble, and plaster. Ward studied in New York under local sculptor Henry Kirke Brown before opening his own New York studio in 1861. He enjoyed a very successful career, and was noted for his natural, realistic work. Also an abolitionist, Ward attempted to portray the complexities of emancipation in his popular sculpture The Freedman (1865).
Washington, Jennie Scott (active, 20th century–today):
African American painter who focuses on historical and contemporary subjects. Washington was a protégée of Margaret Burroughs, the artist, writer, and co-founder of the DuSable Museum of African American History. Educated at the American Academy of Art in Chicago and the Art Institute of Chicago, Washington also teaches art. Her public access art program, Jennie's Reflections, has been on the air in Chicago since 1989.
Paint in which the pigment (color) is suspended in water. Most often painted on paper, watercolors were also used to give color to drawings and to black-and-white prints (such as those by Currier and Ives) and sometimes to photographs. They are more portable and faster drying than oil paints. Although watercolor was often associated with amateur or women artists, many well-known Civil War era artists like Winslow Homer, Samuel Colman, and others worked in the medium.
Waud, Alfred R. (1828–1891):
English born illustrator, painter, and photographer who immigrated to America in 1858 and worked as a staff artist for the magazine Harper’s Weekly during and after the Civil War. Waud’s sketches were first-person accounts of the war that reached thousands of readers. After the Civil War, he traveled through the South documenting the Reconstruction. Waud also toured the American West, depicting the frontier, Native Americans, and pioneers.
Wessel, Sophie (1916–1994):
American artist and community activist. A graduate of the School of the Art Institute of Chicago, Wessel was an artist under the Works Progress Administration in the late 1930s, a jobs program that helped artists and other workers weather the Great Depression. Primarily an oil painter, Wessel also worked in drawing, in sculpture, in watercolors, and as a printmaker. Wessel’s art focuses on political and social-justice subjects, like the Civil Rights Movement, rights for women, and the Anti-War Movement. She also taught art at several Chicago-area community centers.
Political party founded in 1833 in opposition to the policies of President Andrew Jackson. Whigs supported a platform of compromise and balance in government as well as federal investments in manufacturing and national transportation improvements. They tended to oppose aggressive territorial expansion programs. The Whig party dissolved in 1856 over division on the issue of whether slavery should expand into the United States’ territories. Many Northern Whigs went on to found the Republican Party.
White, Stanford (1853–1906):
Influential architect of the firm McKim, Mead, and White. White worked with his firm and independently to design several enduring structures such as the Washington Square Arch (1889) and the New York Herald Building (1894). White was murdered by the husband of his former lover in the original Madison Square Garden (a building he had also designed).
Wiest, D. T. (active, 19th century):
Artist who created the image In Memory of Abraham Lincoln: The Reward of the Just after Lincoln’s assassination.
Elite infantry troops and voluntary drill teams that wore showy uniforms—brightly colored jackets and baggy pants—inspired by uniform designs that French soldiers popularized in the 1830s. The French Zouaves had borrowed ideas for their uniforms from Algerian (northern African) soldiers. Zouaves existed in many armies across the world. Civil War Zouaves were often seen in parades, but they served bravely in battle, too. Colonel Elmer E. Ellsworth (1837–1861), a personal friend of Abraham Lincoln and the first casualty of the Civil War, led a Zouave unit that was well known in Chicago, Illinois, and across the country.
A screw thread, often shortened to thread, is a helical structure used to convert between rotational and linear movement or force. A screw thread is a ridge wrapped around a cylinder or cone in the form of a helix, with the former being called a straight thread and the latter called a tapered thread. A screw thread is the essential feature of the screw as a simple machine and also as a fastener. More screw threads are produced each year than any other machine element.
The mechanical advantage of a screw thread depends on its lead, which is the linear distance the screw travels in one revolution. In most applications, the lead of a screw thread is chosen so that friction is sufficient to prevent linear motion being converted to rotary; that is, the screw does not slip even when linear force is applied, so long as no external rotational force is present. This characteristic is essential to the vast majority of its uses. The tightening of a fastener's screw thread is comparable to driving a wedge into a gap until it sticks fast through friction and slight plastic deformation.
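As a rough numeric illustration of these ideas (a sketch only, not from the original article; it uses the ideal, frictionless mechanical-advantage formula for a screw and the common square-thread approximation for self-locking):

```python
import math

def ideal_mechanical_advantage(effort_radius: float, lead: float) -> float:
    """Ideal MA: distance moved by the effort in one turn divided by the axial advance."""
    return (2 * math.pi * effort_radius) / lead

def is_self_locking(lead: float, mean_thread_radius: float, friction_coeff: float) -> bool:
    """Square-thread approximation: the screw holds (does not back-drive)
    when tan(lead angle) is less than the coefficient of friction."""
    lead_angle = math.atan(lead / (2 * math.pi * mean_thread_radius))
    return math.tan(lead_angle) < friction_coeff

# A 200 mm jack handle turning a screw with a 2 mm lead: ideal MA of roughly 628.
print(round(ideal_mechanical_advantage(200.0, 2.0)))
# The same screw (mean thread radius ~5 mm) with friction around 0.15: self-locking.
print(is_self_locking(2.0, 5.0, 0.15))
```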
Screw threads have several applications:
- Gear reduction via worm drives
- Moving objects linearly by converting rotary motion to linear motion, as in the leadscrew of a jack.
- Measuring by correlating linear motion to rotary motion (and simultaneously amplifying it), as in a micrometer.
- Both moving objects linearly and simultaneously measuring the movement, combining the two aforementioned functions, as in a leadscrew of a lathe.
In all of these applications, the screw thread has two main functions:
- It converts rotary motion into linear motion.
- It prevents linear motion without the corresponding rotation.
Every matched pair of threads, external and internal, can be described as male and female. For example, a screw has male threads, while its matching hole (whether in nut or substrate) has female threads. This property is called gender.
The helix of a thread can twist in two possible directions, which is known as handedness. Most threads are oriented so that the threaded item, when seen from a point of view on the axis through the center of the helix, moves away from the viewer when it is turned in a clockwise direction, and moves towards the viewer when it is turned counterclockwise. This is known as a right-handed (RH) thread, because it follows the right hand grip rule. Threads oriented in the opposite direction are known as left-handed (LH).
By common convention, right-handedness is the default handedness for screw threads. Therefore, most threaded parts and fasteners have right-handed threads. Left-handed thread applications include:
- Where the rotation of a shaft would cause a conventional right-handed nut to loosen rather than to tighten due to fretting-induced precession. Examples include the left-hand pedal of a bicycle.
- In combination with right-handed threads in turnbuckles and clamping studs.
- In some gas supply connections to prevent dangerous misconnections, for example in gas welding the flammable gas supply uses left-handed threads.
- In a situation where neither threaded pipe end can be rotated to tighten/loosen the joint, e.g. in traditional heating pipes running through multiple rooms in a building. In such a case, the coupling will have one right-handed and one left-handed thread
- In some instances, for example early ballpoint pens, to provide a "secret" method of disassembly.
- In some mechanisms, to give a more intuitive action.
- Some Edison base lamps and fittings (such as formerly on the New York City Subway) have a left-hand thread to deter theft, since they cannot be used in other light fixtures.
The term chirality comes from the Greek word for "hand" and concerns handedness in many other contexts.
The cross-sectional shape of a thread is often called its form or threadform (also spelled thread form). It may be square, triangular, trapezoidal, or other shapes. The terms form and threadform sometimes refer to all design aspects taken together (cross-sectional shape, pitch, and diameters).
Most triangular threadforms are based on an isosceles triangle. These are usually called V-threads or vee-threads because of the shape of the letter V. For 60° V-threads, the isosceles triangle is, more specifically, equilateral. For buttress threads, the triangle is scalene.
The theoretical triangle is usually truncated to varying degrees (that is, the tip of the triangle is cut short). A V-thread in which there is no truncation (or a minuscule amount considered negligible) is called a sharp V-thread. Truncation occurs (and is codified in standards) for practical reasons:
- The thread-cutting or thread-forming tool cannot practically have a perfectly sharp point; at some level of magnification, the point is truncated, even if the truncation is very small.
- Too-small truncation is undesirable anyway, because:
- The cutting or forming tool's edge will break too easily;
- The part or fastener's thread crests will have burrs upon cutting, and will be too susceptible to additional future burring resulting from dents (nicks);
- The roots and crests of mating male and female threads need clearance to ensure that the sloped sides of the V meet properly despite (a) error in pitch diameter and (b) dirt and nick-induced burrs.
- The point of the threadform adds little strength to the thread.
Ball screws, whose male-female pairs involve bearing balls in between, show that other variations of form are possible. Roller screws use conventional thread forms but introduce an interesting twist on the theme.
The angle characteristic of the cross-sectional shape is often called the thread angle. For most V-threads, this is standardized as 60 degrees, but any angle can be used.
Lead, pitch, and starts
Lead (pronounced /ˈliːd/) and pitch are closely related concepts. They can be confused because they are the same for most screws. Lead is the distance along the screw's axis that is covered by one complete rotation of the screw (360°). Pitch is the distance from the crest of one thread to the next. Because the vast majority of screw threadforms are single-start threadforms, their lead and pitch are the same. Single-start means that there is only one "ridge" wrapped around the cylinder of the screw's body. Each time that the screw's body rotates one turn (360°), it has advanced axially by the width of one ridge. "Double-start" means that there are two "ridges" wrapped around the cylinder of the screw's body. Each time that the screw's body rotates one turn (360°), it has advanced axially by the width of two ridges. Another way to express this is that lead and pitch are parametrically related, and the parameter that relates them, the number of starts, very often has a value of 1, in which case their relationship becomes equality. In general, lead is equal to S times pitch, in which S is the number of starts.
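A minimal sketch of the lead-pitch-starts relationship just described (illustrative only; the function name is my own):

```python
def lead(pitch: float, starts: int = 1) -> float:
    """Axial advance per full turn of the screw: lead = starts * pitch."""
    return starts * pitch

print(lead(1.25))            # single-start, 1.25 mm pitch -> 1.25 mm per turn
print(lead(1.25, starts=2))  # double-start, same pitch    -> 2.50 mm per turn
```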
Whereas metric threads are usually defined by their pitch, that is, how much distance per thread, inch-based standards usually use the reverse logic, that is, how many threads occur per a given distance. Thus inch-based threads are defined in terms of threads per inch (TPI). Pitch and TPI describe the same underlying physical property—merely in different terms. When the inch is used as the unit of measurement for pitch, TPI is the reciprocal of pitch and vice versa. For example, a 1⁄4-20 thread has 20 TPI, which means that its pitch is 1⁄20 inch (0.050 in or 1.27 mm).
As the distance from the crest of one thread to the next, pitch can be compared to the wavelength of a wave. Another wave analogy is that pitch and TPI are inverses of each other in a similar way that period and frequency are inverses of each other.
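Since pitch and TPI are reciprocals when pitch is measured in inches, the 1⁄4-20 example above can be checked with a couple of one-line conversions (a sketch, assuming inch units):

```python
def pitch_from_tpi(tpi: float) -> float:
    """Pitch in inches is the reciprocal of threads per inch."""
    return 1.0 / tpi

def tpi_from_pitch(pitch_in: float) -> float:
    """Threads per inch is the reciprocal of pitch in inches."""
    return 1.0 / pitch_in

p = pitch_from_tpi(20)        # 1/4-20 thread
print(p, p * 25.4)            # 0.05 (in), 1.27 (mm)
print(tpi_from_pitch(0.05))   # 20.0
```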
Coarse versus fine
Coarse threads are those with larger pitch (fewer threads per axial distance), and fine threads are those with smaller pitch (more threads per axial distance). Coarse threads have a larger threadform relative to screw diameter, whereas fine threads have a smaller threadform relative to screw diameter. This distinction is analogous to that between coarse teeth and fine teeth on a saw or file, or between coarse grit and fine grit on sandpaper.
The common V-thread standards (ISO 261 and Unified Thread Standard) include a coarse pitch and a fine pitch for each major diameter. For example, 1⁄2-13 belongs to the UNC series (Unified National Coarse) and 1⁄2-20 belongs to the UNF series (Unified National Fine).
A common misconception among people not familiar with engineering or machining is that the term coarse implies here lower quality and the term fine implies higher quality. The terms when used in reference to screw thread pitch have nothing to do with the tolerances used (degree of precision) or the amount of craftsmanship, quality, or cost. They simply refer to the size of the threads relative to the screw diameter. Coarse threads can be made accurately, or fine threads inaccurately.
There are several relevant diameters for screw threads: major diameter, minor diameter, and pitch diameter.
Major diameter
Major diameter is the largest diameter of the thread. For a male thread, this means "outside diameter", but in careful usage the better term is "major diameter", since the underlying physical property being referred to is independent of the male/female context. On a female thread, the major diameter is not on the "outside". The terms "inside" and "outside" invite confusion, whereas the terms "major" and "minor" are always unambiguous.
Minor diameter
Minor diameter is the smallest diameter of the thread.
Pitch diameter
Pitch diameter (sometimes abbreviated PD) is a diameter in between major and minor. It is the diameter at which each pitch is equally divided between the mating male and female threads. It is important to the fit between male and female threads, because a thread can be cut to various depths in between the major and minor diameters, with the roots and crests of the threadform being variously truncated, but male and female threads will only mate properly if their sloping sides are in contact, and that contact can only happen if the pitch diameters of male and female threads match closely. Another way to think of pitch diameter is "the diameter on which male and female should meet".
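For the standard 60° V-thread forms (UTS and ISO), the basic (untoleranced) pitch diameter works out to the major diameter minus about 0.6495 times the pitch. The sketch below is my own illustration of that relationship, not text from the standards; real parts are toleranced around these basic values:

```python
import math

def basic_pitch_diameter(major_dia: float, pitch: float) -> float:
    """Basic pitch diameter of a 60-degree V-thread: D - 0.75*H measured on the
    diameter, where H = (sqrt(3)/2) * pitch is the sharp-V triangle height."""
    H = math.sqrt(3) / 2 * pitch
    return major_dia - 0.75 * H   # 0.75*H is about 0.6495 * pitch

print(round(basic_pitch_diameter(0.250, 1 / 20), 4))  # 1/4-20 UNC -> 0.2175 in
print(round(basic_pitch_diameter(8.0, 1.25), 3))      # M8x1.25    -> 7.188 mm
```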
Classes of fit
The way in which male and female fit together, including play and friction, is classified (categorized) in thread standards. Achieving a certain class of fit requires the ability to work within tolerance ranges for dimension (size) and surface finish. Defining and achieving classes of fit are important for interchangeability. Classes include 1, 2, 3 (loose to tight); A (external) and B (internal); and various systems such as H and D limits.
Standardization and interchangeability
To achieve a predictably successful mating of male and female threads and assured interchangeability between males and between females, standards for form, size, and finish must exist and be followed. Standardization of threads is discussed below.
Thread depth
Screw threads are almost never made perfectly sharp (no truncation at the crest or root), but instead are truncated, yielding a final thread depth that can be expressed as a fraction of the pitch value. The UTS and ISO standards codify the amount of truncation, including tolerance ranges.
A perfectly sharp 60° V-thread will have a depth of thread ("height" from root to crest) equal to .866 of the pitch. This fact is intrinsic to the geometry of an equilateral triangle—a direct result of the basic trigonometric functions. It is independent of measurement units (inch vs mm). However, UTS and ISO threads are not sharp threads. The major and minor diameters delimit truncations on either side of the sharp V, typically about 1/8p (although the actual geometry definition has more variables than that). This means that a full (100%) UTS or ISO thread has a height of around .65p.
Threads can be (and often are) truncated a bit more, yielding thread depths of 60% to 75% of the .65p value. This makes the thread-cutting easier (yielding shorter cycle times and longer tap and die life) without a large sacrifice in thread strength. The increased truncation is quantified by the percentage of thread that it leaves in place, where the nominal full thread (where depth is about .65p) is considered 100%. For most applications, 60% to 75% threads are used. In many cases 60% threads are optimal, and 75% threads are wasteful or "over-engineered" (additional resources were unnecessarily invested in creating them). To truncate the threads below 100% of nominal, different techniques are used for male and female threads. For male threads, the bar stock is "turned down" somewhat before thread cutting, so that the major diameter is reduced. Likewise, for female threads the stock material is drilled with a slightly larger tap drill, increasing the minor diameter. (The pitch diameter is not affected by these operations, which are only varying the major or minor diameters.)
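The figures in this section tie together in a short calculation. The sketch below uses the common machinists' approximation for inch threads (drill diameter roughly equals major diameter - 0.01299 * %thread / TPI); it is an illustration, not a substitute for a published tap-drill chart:

```python
import math

def sharp_v_height(pitch: float) -> float:
    """Height of the theoretical sharp 60-degree V: H = (sqrt(3)/2) * p, about 0.866p."""
    return math.sqrt(3) / 2 * pitch

def full_thread_depth(pitch: float) -> float:
    """Nominal 100% single-side thread depth, about 0.65p (0.75 * H)."""
    return 0.75 * sharp_v_height(pitch)

def tap_drill(major_dia: float, tpi: float, percent_thread: float = 75.0) -> float:
    """Approximate tap drill: major diameter minus twice the full depth scaled by % of thread."""
    pitch = 1.0 / tpi
    return major_dia - 2 * full_thread_depth(pitch) * (percent_thread / 100.0)

# 1/4-20 UNC tapped at 75% thread: about 0.201 in, the commonly listed #7 drill.
print(round(tap_drill(0.250, 20, 75), 3))
# The same hole at 60% thread is a little larger (easier tapping, slightly weaker thread).
print(round(tap_drill(0.250, 20, 60), 3))
```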
This balancing of truncation versus thread strength is common to many engineering decisions involving material strength and material thickness, cost, and weight. Engineers use a number called the safety factor to quantify the increased material thicknesses or other dimension beyond the minimum required for the estimated loads on a mechanical part. Increasing the safety factor generally increases the cost of manufacture and decreases the likelihood of a failure. So the safety factor is often the focus of a business management decision when a mechanical product's cost impacts business performance and failure of the product could jeopardize human life or company reputation. For example, aerospace contractors are particularly rigorous in the analysis and implementation of safety factors, given the incredible damage that failure could do (crashed aircraft or rockets). Material thickness affects not only the cost of manufacture, but also the device's weight and therefore the cost (in fuel) to lift that weight into the sky (or orbit). The cost of failure and the cost of manufacture are both extremely high. Thus the safety factor dramatically impacts company fortunes and is often worth the additional engineering expense required for detailed analysis and implementation.
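As a bare-bones numeric illustration of the safety-factor idea (hypothetical numbers, not taken from any standard):

```python
def safety_factor(failure_load: float, working_load: float) -> float:
    """Ratio of the load at which the part is expected to fail to its design working load."""
    return failure_load / working_load

# A fastener expected to fail at 10 kN but designed to carry 2.5 kN has a safety factor of 4.
print(safety_factor(10_000, 2_500))  # 4.0
```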
Tapered threads are used on fasteners and pipe. A common example of a fastener with a tapered thread is a wood screw.
The threaded pipes used in some plumbing installations for the delivery of fluids under pressure have a threaded section that is slightly conical. Examples are the NPT and BSP series. The seal provided by a threaded pipe joint is created when a tapered externally threaded end is tightened into an end with internal threads. Normally a good seal requires the application of a separate sealant in the joint, such as thread seal tape, or a liquid or paste pipe sealant such as pipe dope; however, some threaded pipe joints do not require a separate sealant.
Standardization of screw threads has evolved since the early nineteenth century to facilitate compatibility between different manufacturers and users. The standardization process is still ongoing; in particular there are still (otherwise identical) competing metric and inch-sized thread standards widely used. Standard threads are commonly identified by short letter codes (M, UNC, etc.) which also form the prefix of the standardized designations of individual threads.
Additional product standards identify preferred thread sizes for screws and nuts, as well as corresponding bolt head and nut sizes, to facilitate compatibility between spanners (wrenches) and other tools.
ISO standard threads
These were standardized by the International Organization for Standardization (ISO) in 1947. Although metric threads were mostly unified in 1898 by the International Congress for the standardization of screw threads, separate metric thread standards were used in France, Germany, and Japan, and the Swiss had a set of threads for watches.
Other current standards
In particular applications and certain regions, threads other than the ISO metric screw threads remain commonly used, sometimes because of special application requirements, but mostly for reasons of backwards compatibility:
- ASME B1.1 Unified Inch Screw Threads (UN and UNR Thread Form), considered an American National Standard (ANS) widely used in the US and Canada
- Unified Thread Standard (UTS), which is still the dominant thread type in the United States and Canada. This standard includes:
- Unified Coarse (UNC), commonly referred to as "National Coarse" or "NC" in retailing.
- Unified Fine (UNF), commonly referred to as "National Fine" or "NF" in retailing.
- Unified Extra Fine (UNEF)
- Unified Special (UNS)
- National pipe thread (NPT), used for plumbing of water and gas pipes, and threaded electrical conduit.
- NPTF (National Pipe Thread Fuel)
- British Standard Whitworth (BSW), and for other Whitworth threads including:
- British standard pipe thread (BSP) which exists in a taper and non taper variant; used for other purposes as well
- British Standard Pipe Taper (BSPT)
- British Association screw threads (BA), primarily electronic/electrical, moving coil meters and to mount optical lenses
- British Standard Buttress Threads (BS 1657:1950)
- British Standard for Spark Plugs BS 45:1972
- British Standard Brass, a fixed-pitch 26 tpi thread
- Glass Packaging Institute threads (GPI), primarily for glass bottles and vials
- Power screw threads
- Camera case screws, used to mount a camera on a photographic tripod:
- ¼″ UNC used on almost all small cameras
- ⅜″ UNC for larger (and some older small) cameras
(many older cameras use ¼" BSW or ⅜" BSW threads; in low-stress applications, and if machined to wide tolerances, these are for practical purposes compatible with the UNC threads)
- Royal Microscopical Society (RMS) thread, also known as society thread, is a special 0.8" diameter x 36 thread-per-inch (tpi) Whitworth thread form used for microscope objective lenses.
- Microphone stands:
- ⅝″ 27 threads per inch (tpi) Unified Special thread (UNS, USA and the rest of the world)
- ¼″ BSW (not common in the USA, used in the rest of the world)
- ⅜″ BSW (not common in the USA, used in the rest of the world)
- Stage lighting suspension bolts (in some countries only; some have gone entirely metric, others such as Australia have reverted to the BSW threads, or have never fully converted):
- ⅜″ BSW for lighter luminaires
- ½″ BSW for heavier luminaires
- Tapping screw threads (ST) – ISO 1478
- Aerospace inch threads (UNJ) – ISO 3161
- Aerospace metric threads (MJ) – ISO 5855
- Tyre valve threads (V) – ISO 4570
- Metal bone screws (HA, HB) – ISO 5835
- Panzergewinde (Pg) (German) is an old German 80° thread (DIN 40430) that remained in use until 2000 in some electrical installation accessories in Germany.
- Fahrradgewinde (Fg) (English: bicycle thread) is a German bicycle thread standard (per DIN 79012 and DIN 13.1), which encompasses a lot of CEI and BSC threads as used on cycles and mopeds everywhere (http://www.fahrradmonteur.de/fahrradgewinde.php)
- CEI (Cycle Engineers Institute, used on bicycles in Britain and possibly elsewhere)
- Edison base Incandescent light bulb holder screw thread
- Fire hose connection (NFPA standard 194)
- Hose Coupling Screw Threads (ANSI/ASME B1.20.7-1991 [R2003]) for garden hoses and accessories
- Löwenherz thread, a German metric thread used for measuring instruments
- Sewing machine thread
History of standardization
The first historically important intra-company standardization of screw threads began with Henry Maudslay around 1800, when the modern screw-cutting lathe made interchangeable V-thread machine screws a practical commodity. During the next 40 years, standardization continued to occur on the intra-company and inter-company level. No doubt many mechanics of the era participated in this zeitgeist; Joseph Clement was one of those whom history has noted. In 1841, Joseph Whitworth created a design that, through its adoption by many British railroad companies, became a national standard for the United Kingdom called British Standard Whitworth. During the 1840s through 1860s, this standard was often used in the United States and Canada as well, in addition to myriad intra- and inter-company standards. In April 1864, William Sellers presented a paper to the Franklin Institute in Philadelphia, proposing a new standard to replace the U.S.'s poorly standardized screw thread practice. Sellers simplified the Whitworth design by adopting a thread profile of 60° and a flattened tip (in contrast to Whitworth's 55° angle and rounded tip). The 60° angle was already in common use in America, but Sellers's system promised to make it and all other details of threadform consistent.
The Sellers thread, easier for ordinary machinists to produce, became an important standard in the U.S. during the late 1860s and early 1870s, when it was chosen as a standard for work done under U.S. government contracts, and it was also adopted as a standard by highly influential railroad industry corporations such as the Baldwin Locomotive Works and the Pennsylvania Railroad. Other firms adopted it, and it soon became a national standard for the U.S., later becoming generally known as the United States Standard thread (USS thread). Over the next 30 years the standard was further defined and extended and evolved into a set of standards including National Coarse (NC), National Fine (NF), and National Pipe Taper (NPT). Meanwhile, in Britain, the British Association screw threads were also developed and refined.
During this era, in continental Europe, the British and American threadforms were well known, but also various metric thread standards were evolving, which usually employed 60° profiles. Some of these evolved into national or quasi-national standards. They were mostly unified in 1898 by the International Congress for the standardization of screw threads at Zurich, which defined the new international metric thread standards as having the same profile as the Sellers thread, but with metric sizes. Efforts were made in the early 20th century to convince the governments of the U.S., UK, and Canada to adopt these international thread standards and the metric system in general, but they were defeated with arguments that the capital cost of the necessary retooling would drive some firms from profit to loss and hamper the economy. (The mixed use of dueling inch and metric standards has since cost much, much more, but the bearing of these costs has been more distributed across national and global economies rather than being borne up front by particular governments or corporations, which helps explain the lobbying efforts.)
Sometime between 1912 and 1916, the Society of Automobile Engineers (SAE) created an "SAE series" of screw thread sizes to augment the USS standard.
During the late 19th and early 20th centuries, engineers found that ensuring the reliable interchangeability of screw threads was a multi-faceted and challenging task that was not as simple as just standardizing the major diameter and pitch for a certain thread. It was during this era that more complicated analyses made clear the importance of variables such as pitch diameter and surface finish.
A tremendous amount of engineering work was done throughout World War I and the following interwar period in pursuit of reliable interchangeability. Classes of fit were standardized, and new ways of generating and inspecting screw threads were developed (such as production thread-grinding machines and optical comparators). Therefore, in theory, one might expect that by the start of World War II, the problem of screw thread interchangeability would have already been completely solved. Unfortunately, this proved to be false. Intranational interchangeability was widespread, but international interchangeability was less so. Problems with lack of interchangeability among American, Canadian, and British parts during World War II led to an effort to unify the inch-based standards among these closely allied nations, and the Unified Thread Standard was adopted by the Screw Thread Standardization Committees of Canada, the United Kingdom, and the United States on November 18, 1949 in Washington, D.C., with the hope that they would be adopted universally. (The original UTS standard may be found in ASA (now ANSI) publication, Vol. 1, 1949.) UTS consists of Unified Coarse (UNC), Unified Fine (UNF), Unified Extra Fine (UNEF) and Unified Special (UNS). The standard was not widely taken up in the UK, where many companies continued to use the UK's own British Association (BA) standard.
However, internationally, the metric system was eclipsing inch-based measurement units. In 1947, the ISO was founded; and in 1960, the metric-based International System of Units (abbreviated SI from the French Système International) was created. With continental Europe and much of the rest of the world turning to SI and the ISO metric screw thread, the UK gradually leaned in the same direction. The ISO metric screw thread is now the standard that has been adopted worldwide and has mostly displaced all former standards, including UTS. In the U.S., where UTS is still prevalent, over 40% of products contain at least some ISO metric screw threads. The UK has completely abandoned its commitment to UTS in favour of the ISO metric threads, and Canada is in between. Globalization of industries produces market pressure in favor of phasing out minority standards. A good example is the automotive industry; U.S. auto parts factories long ago developed the ability to conform to the ISO standards, and today very few parts for new cars retain inch-based sizes, regardless of being made in the U.S.
Even today, over a half century since the UTS superseded the USS and SAE series, companies still sell hardware with designations such as "USS" and "SAE" to convey that it is of inch sizes as opposed to metric. Most of this hardware is in fact made to the UTS, but the labeling and cataloging terminology is not always precise.
Engineering drawing
In American engineering drawings, ANSI Y14.6 defines standards for indicating threaded parts. Parts are indicated by their nominal diameter (the nominal major diameter of the screw threads), pitch (number of threads per inch), and the class of fit for the thread. For example, “.750-10UNC-2A” is male (A) with a nominal major diameter of 0.750 in, 10 threads per inch, and a class-2 fit; “.500-20UNF-1B” would be female (B) with a 0.500 in nominal major diameter, 20 threads per inch, and a class-1 fit. An arrow points from this designation to the surface in question.
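The designation format is regular enough to be read mechanically. The sketch below is a minimal illustration in Python: the parse_uts helper and its pattern are assumptions for this example only, covering just the plain inch form shown above and not extensions such as left-hand or special threads.

    import re

    # Parse a UTS thread designation such as ".750-10UNC-2A" into its parts:
    # nominal major diameter (inches), threads per inch, series, class of fit,
    # and whether the thread is male (A, external) or female (B, internal).
    UTS_PATTERN = re.compile(
        r'^(?P<diameter>\d*\.\d+)-(?P<tpi>\d+)'
        r'(?P<series>UNC|UNF|UNEF|UNS)-(?P<fit>[123])(?P<gender>[AB])$'
    )

    def parse_uts(designation):
        m = UTS_PATTERN.match(designation.strip())
        if m is None:
            raise ValueError('not a recognised UTS designation: %r' % designation)
        return {
            'major_diameter_in': float(m.group('diameter')),
            'threads_per_inch': int(m.group('tpi')),
            'series': m.group('series'),
            'class_of_fit': int(m.group('fit')),
            'external': m.group('gender') == 'A',   # A = male, B = female
        }

    print(parse_uts('.750-10UNC-2A'))
    # {'major_diameter_in': 0.75, 'threads_per_inch': 10, 'series': 'UNC',
    #  'class_of_fit': 2, 'external': True}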
There are many ways to generate a screw thread, including the traditional subtractive types (e.g., various kinds of cutting [single-pointing, taps and dies, die heads, milling]; molding; casting [die casting, sand casting]; forming and rolling; grinding; and occasionally lapping to follow the other processes); newer additive techniques; and combinations thereof.
- Inspection of thread geometry is discussed at Threading (manufacturing) > Inspection.
See also
|Wikimedia Commons has media related to: Screw threads|
- Acme Thread Form
- Bicycle thread
- Buttress Thread Form
- Dryseal Pipe Threads Form
- Filter thread
- Garden hose thread form
- Metric: M Profile Thread Form
- National Thread Form
- National Pipe Thread Form
- Nut (hardware)
- Tapered thread
- Thread pitch gauge
- International Thread Standards
- ModelFixings - Thread Data
- NASA RP-1228 Threaded Fastener Design Manual[dead link] | http://en.wikipedia.org/wiki/Screw_thread | 13 |
16 | Sweden's human history began around 10,000 years ago at the end of the last ice age, once the Scandinavian ice sheet had melted. Tribes from central Europe migrated into the south of Sweden, and ancestors of the Sami people hunted reindeer from Siberia into the northern regions.
These nomadic Stone Age hunter-gatherers gradually made more permanent settlements, keeping animals, catching fish and growing crops. A typical relic of this period (3000 BC to 1800 BC) is the gångrift, a dolmen or rectangular passage-tomb covered with capstones, then a mound of earth. Pottery, amber beads and valuable flint tools were buried with the dead. The island of Öland, in southeast Sweden, is a good place to see clusters of Stone Age barrows.
As the climate improved between 1800 BC and 500 BC, Bronze Age cultures blossomed. Their hällristningar (rock carvings) are found in many parts of Sweden - Dalsland and Bohuslän are particularly rich areas. The carvings provide tantalising glimpses of forgotten beliefs, with the sun, hunting scenes and ships being favourite themes. Huge Bronze Age burial mounds, such as Kiviksgraven in Österlen, suggest that powerful chieftains had control over spiritual and temporal matters. Relatively few bronze artefacts are found in Sweden: the metals had to be imported from central Europe in exchange for furs, amber and other northern treasures.
After 500 BC, the Iron Age brought about technological advances, demonstrated by archaeological finds of agricultural tools, graves and primitive furnaces. During this period, the runic alphabet arrived, probably from the Germanic region. It was used to carve inscriptions onto monumental rune stones (there are around 3000 in Sweden) well into medieval times.
By the 7th century AD, the Svea people of the Mälaren valley (just west of Stockholm) had gained supremacy, and their kingdom ('Svea Rike', or Sverige) gave the country of Sweden its name. Birka, founded around 760 on Björkö (an island in Mälaren lake), was a powerful Svea centre for around 200 years. Large numbers of Byzantine and Arab coins have been found there, and stones with runic inscriptions are scattered across the area.
Scandinavia's greatest impact on world history probably occurred during the Viking Age (around 800 to 1100), when hardy pagan Norsemen set sail for other shores. In Sweden, it's generally thought that population pressures were to blame for the sudden exodus: a polygamous society led to an excess of male heirs and ever-smaller plots of land. Combined with the prospects of military adventure and foreign trade abroad, the result was the Viking phenomenon (the word is derived from vik, meaning 'bay' or 'cove', and is probably a reference to their anchorages during raids).
The Vikings sailed a new type of boat that was fast and highly manoeuvrable but sturdy enough for ocean crossings, with a heavy keel, up to 16 pairs of oars and a large square sail (the Äskekärr Ship, Sweden's only original Viking vessel, is in Göteborg's Stadsmuseum). Initial hit-and-run raids along the European coast - often on monasteries and their terrified monks - were followed by major military expeditions, settlement and trade. The well-travelled Vikings penetrated the Russian heartland and beyond, venturing as far as America, Constantinople (modern-day Istanbul) and Baghdad.
In Sweden, the Vikings generally cremated their dead and their possessions, then buried the remains under a mound. There are also several impressive stone ship settings, made from upright stones arranged in the shape of a ship. If you're interested in Viking culture, Foteviken on the southwestern Falsterbo Peninsula is a 'living' reconstruction of a Viking village.
Early in the 9th century, the missionary St Ansgar established a church at Birka. Sweden's first Christian king, Olof Skötkonung (c 968-1020) is said to have been baptised at St Sigfrid's Well in Husaby in 1008 - the well is now a sort of place of pilgrimage for Swedes - but worship continued in Uppsala's pagan temple until at least 1090. By 1160, King Erik Jedvarsson (Sweden's patron saint, St Erik) had virtually destroyed the last remnants of paganism.
Olof Skötkonung was also the first king to rule over both the Sveas and the Gauts, creating the kernel of the Swedish state. During the 12th and 13th centuries, these united peoples mounted a series of crusades to Finland, Christianising the country and steadily absorbing it into Sweden.
Royal power disintegrated over succession squabbles in the 13th century. The medieval statesman Birger Jarl (1210-66) rose to fill the gap, acting as prince regent for 16 years, and founding the city of Stockholm in 1252.
King Magnus Ladulås (1240-90) introduced a form of feudalism in 1280, but managed to avoid its worst excesses. In fact, the aristocracy were held in check by the king, who forbade them from living off the peasantry when moving from estate to estate.
Magnus' eldest son Birger (1280-1321) assumed power in 1302. After long feuds with his younger brothers, he tricked them into coming to Nyköping castle, where he threw them into the dungeon and starved them to death. After this fratricidal act, the nobility drove Birger into exile. They then chose their own king of Sweden, the infant grandson of King Haakon V of Norway. When Haakon died without leaving a male heir, the kingdoms of Norway and Sweden were united (1319).
The increasingly wealthy church began to show its might in the 13th and 14th centuries, commissioning monumental buildings such as the domkyrka (cathedral) in Linköping (founded 1250), and Scandinavia's largest Gothic cathedral in Uppsala (founded 1285).
However, in 1350 the rise of state and church endured a horrific setback, when the Black Death swept through the country, carrying off around a third of the Swedish population. In the wake of the horror, St Birgitta (1303-73) reinvigorated the church with her visions and revelations, and founded a nunnery and cathedral in Vadstena, which became Sweden's most important pilgrimage site.
A strange phenomenon of the time was the German-run Hanseatic League, a group of well-organised merchants who established walled trading towns in Germany and along the Baltic coast. In Sweden, they built Visby and maintained a strong presence in the young city of Stockholm. Their rapid growth caused great concern around the Baltic in the 14th century: an allied Scandinavian front was vital. Negotiated by the Danish regent Margrethe, the Union of Kalmar (1397) united Denmark, Norway and Sweden under one crown.
Erik of Pomerania, Margrethe's nephew, held that crown until 1439. High taxation to fund wars against the Hanseatic League made him deeply unpopular and he was eventually deposed. His replacement was short-lived and succession struggles began again: two powerful Swedish families, the unionist Oxenstiernas and the nationalist Stures, fought for supremacy.
Out of the chaos, Sten Sture the Elder (1440-1503) eventually emerged as 'Guardian of Sweden' in 1470, going on to fight and defeat an army of unionist Danes at the Battle of Brunkenberg (1471) in Stockholm.
The failing Union's death-blow came in 1520: Christian II of Denmark invaded Sweden and killed the regent Sten Sture the Younger (1493-1520). After granting a full amnesty to Sture's followers, Christian went back on his word: 82 of them were arrested, tried and massacred in Stockholm's main square, Stortorget in Gamla Stan, which 'ran with rivers of blood'.
The brutal 'Stockholm Bloodbath' sparked off a major rebellion under the leadership of the young nobleman Gustav Ericsson Vasa (1496-1560). It was a revolution that almost never happened: having failed to raise enough support, Gustav was fleeing for the Norwegian border when two exhausted skiers caught him up to tell him that the people had changed their minds. This legendary ski journey is celebrated every year in the Vasaloppet race between Sälen and Mora.
In 1523, Sweden seceded from the union and installed Gustav as the first Vasa king: he was crowned on 6 June, now the country's national day.
Gustav I ruled for 37 years, leaving behind a powerful, centralised nation-state. He introduced the Reformation to Sweden (principally as a fundraising exercise): ecclesiastical property became the king's, and the Lutheran Protestant Church was placed under the crown's direct control.
After Gustav Vasa's death in 1560, bitter rivalry broke out among his sons. His eldest child, Erik XIV (1533-77), held the throne for eight years in a state of increasing paranoia. After committing a trio of injudicious murders at Uppsala Slott, Erik was deposed by his half-brother Johan III (1537-92) and poisoned with pea soup at Örbyhus Slott. During the brothers' reigns, the Danes tried and failed to reassert sovereignty over Sweden in the Seven Years War (1563-70).
Gustav's youngest son, Karl IX (1550-1611), finally had a chance at the throne in 1607, but was unsuccessful militarily and ruled for a mere four years. He was succeeded by his 17-year-old son. Despite his youth, Gustav II Adolf (1594-1632) proved to be a military genius, recapturing southern parts of the country from Denmark and consolidating Sweden's control over the eastern Baltic (the copper mine at Falun financed many of his campaigns). A devout Lutheran, Gustav II supported the German Protestants during the Thirty Years War (1618-48). He invaded Catholic Poland and defeated his cousin King Sigismund III, later meeting his own end in battle in 1632.
Gustav II's daughter, Kristina, was still a child in 1632, and her regent continued her father's warlike policies. In 1654, Kristina abdicated in favour of Karl X Gustav, ending the Vasa dynasty.
For an incredible glimpse into this period, track down Sweden's 17th-century royal warship Vasa (commissioned by Gustav II in 1625), now in Stockholm's Vasamuseet.
The zenith and collapse of the Swedish empire happened remarkably quickly. During the harsh winter of 1657, Swedish troops invaded Denmark across the frozen Kattegatt, a strait between Sweden and Denmark, and the last remaining parts of southern Sweden still in Danish hands were handed over at the Peace of Roskilde. Bohuslän, Härjedalen and Jämtland were seized from Norway, and the empire reached its maximum size when Sweden established a short-lived American colony in what is now Delaware.
The end of the 17th century saw a developing period of enlightenment in Sweden; Olof Rudbeck achieved widespread fame for his medical work, which included the discovery of the lymphatic system.
Inheritor of this huge and increasingly sophisticated country was King Karl XII (1681-1718). Karl XII was an overenthusiastic military adventurer who spent almost all of his reign at war: he managed to lose Latvia, Estonia and Poland, and the Swedish coast sustained damaging attacks from Russia. Karl XII also fought the Great Nordic War against Norway throughout the early 18th century. A winter siege of Trondheim took its toll on his battle-weary army, and Karl XII was mysteriously shot dead while inspecting his troops - a single event that sealed the fate of Sweden's military might.
During the next 50 years, parliament's power increased and the monarchs became little more than heads of state. Despite the country's decline, intellectual enlightenment streaked ahead and Sweden produced some celebrated writers, philosophers and scientists, including Anders Celsius, whose temperature scale bears his name; Carl Scheele, the discoverer of chlorine; and Carl von Linné (Linnaeus), the great botanist who developed theories about plant reproduction.
Gustav III (1746-92) curtailed parliamentary powers and reintroduced absolute rule in 1789. He was a popular and cultivated king who inaugurated the Royal Opera House in Stockholm (1782), and opened the Swedish Academy of Literature (1786), now known for awarding the annual Nobel Prize for literature. His foreign policy was less auspicious and he was considered exceptionally lucky to lead Sweden intact through a two-year war with Russia (1788-90). Enemies in the aristocracy conspired against the king, hiring an assassin to shoot him at a masked ball in 1792.
Gustav IV Adolf (1778-1837), Gustav III's son, assumed the throne and got drawn into the Napoleonic Wars, permanently losing Finland (one-third of Sweden's territory) to Russia. Gustav IV was forced to abdicate, and his uncle Karl XIII took the Swedish throne under a new constitution that ended unrestricted royal power.
Out of the blue, Napoleon's marshal Jean-Baptiste Bernadotte (1763-1844) was invited by a nobleman, Baron Mörner, to succeed the childless Karl XIII to the Swedish throne. The rest of the nobility adjusted to the idea and Bernadotte took up the offer, along with the name Karl Johan. Karl Johan judiciously changed sides in the war, and led Sweden, allied with Britain, Prussia and Russia, against France and Denmark.
After Napoleon's defeat, Sweden forced Denmark to swap Norway for Swedish Pomerania (1814). The Norwegians objected, defiantly choosing king and constitution, and Swedish troops occupied most of the country. This forced union with Norway was Sweden's last military action.
Industry arrived late in Sweden (during the second half of the 19th century), but when it did come, it transformed the country from one of Western Europe's poorest to one of its richest.
The Göta Canal opened in 1832, providing a valuable transport link between the east and west coasts, and development accelerated when the main railway across Sweden was completed in 1862. Significant Swedish inventions, including dynamite (Alfred Nobel) and the safety match (patented by Johan Edvard Lundstrom), were carefully exploited by government and industrialists; coupled with efficient steel-making and timber exports, they added to a growing economy and the rise of the new middle class.
However, when small-scale peasant farms were replaced with larger concerns, there was widespread discontent in the countryside, exacerbated by famine. Some agricultural workers joined the population drift from rural areas to towns. Others abandoned Sweden altogether: around one million people (an astonishing quarter of the population!) emigrated over just a few decades, mainly to America.
The transformation to an industrial society brought with it trade unions and the Social Democratic Labour Party (Social Democrats for short), founded in 1889 to support workers. The party grew quickly and obtained parliamentary representation in 1896 when Hjalmar Branting was elected.
In 1905, King Oscar II (1829-1907) was forced to recognise Norwegian independence and the two countries went their separate ways.
Sweden declared itself neutral in 1912, and remained so throughout the bloodshed of WWI.
In the interwar period, a Social Democrat-Liberal coalition government took control (1921). Reforms followed quickly, including an eight-hour working day and suffrage for all adults aged over 23.
Swedish neutrality during WWII was somewhat ambiguous: allowing German troops to march through to occupy Norway certainly tarnished Sweden's image. On the other hand, Sweden was a haven for refugees from Finland, Norway, Denmark and the Baltic states; downed allied aircrew who escaped the Gestapo; and many thousands of Jews who escaped persecution and death.
After the war and throughout the 1950s and '60s the Social Democrats continued with the creation of folkhemmet, the welfare state. The standard of living for ordinary Swedes rose rapidly and real poverty was virtually eradicated.
After a confident few decades, the late 20th century saw some unpleasant surprises for Sweden, as economic pressures clouded Sweden's social goals and various sacks of dirty laundry fell out of the cupboard.
In 1986, Prime Minister Olof Palme (1927-86) was assassinated as he walked home from the cinema. The murder and bungled police inquiry shook ordinary Swedes' confidence in their country, institutions and leaders. The killing remains unsolved, but it seems most likely that external destabilisation lay behind this appalling act. Afterwards, the fortunes of the Social Democrats took a turn for the worse as various scandals came to light, including illegal arms trading in the Middle East by the Bofors company.
By late 1992, during the world recession, the country's budgetary problems culminated in frenzied speculation against the Swedish krona. In November of that year the central bank Sveriges Riksbank was forced to abandon fixed exchange rates and let the krona float freely. The currency immediately devalued by 20%, interest rates shot up to a world-record-breaking 500% and unemployment flew to 14%; the government fought back with tax hikes, punishing cuts to the welfare budget and the scrapping of previously relaxed immigration rules.
With both economy and national confidence severely shaken, Swedes narrowly voted in favour of joining the European Union (EU), effective from 1 January 1995. Since then, there have been further major reforms and the economy has improved considerably, with falling unemployment and inflation.
Another shocking political murder, of Foreign Minister Anna Lindh (1957-2003), again rocked Sweden to the core. Far-right involvement was suspected - Lindh was a vocal supporter of the euro, and an outspoken critic of both the war in Iraq and Italy's Silvio Berlusconi - but it appears that her attacker had psychiatric problems. Lindh's death occurred just before the Swedish referendum on whether to adopt the single European currency, but didn't affect the eventual outcome: a 'No' vote. | http://www.lonelyplanet.com/sweden/history | 13 |
100 | Table of Contents
- Front Material
This document contains the table of contents, introduction and other related material.
- Lesson 1 - Why Save?
Following an introduction that defines saving, the students discuss the idea of "paying yourself first" and the reasons why people save. After reporting on their small-group discussions, the students simulate the accumulation of simple interest and compound interest. The lesson concludes with students calculating both simple interest and, using the Rule of 72, the amount of time it takes savings to double when interest is compounded.
- Lesson 2 - Investors and Investments
In this lesson the students explore different types of investments, some of which are unconventional, in order to grasp the basic idea that investment involves trading off present benefits for future satisfaction. The students also apply the criteria of risk, return and liquidity to define more precisely the meaning of investing.
- Lesson 3 - Invest in Yourself
To explore the concept that people invest in themselves through education, the students work in two groups and participate in a mathematics game. Both groups are assigned mathematics problems to solve. One group is told about a special technique for solving the problems. The other group is not. The game helps the students recognize that improved human capital allows people to produce more in the same amount of time - in this example, more correct answers in the same time or less. Next, the students identify the human capital required for a variety of jobs. Finally, they learn about the connections among investment in human capital, careers and earning potential.
- Lesson 4 - What Is a Stock?
The students work in small groups that represent households. Each household answers mathematics and economics questions. For each correct answer, a household earns shares of stock. At the end of the game, the groups that answered all questions correctly receive a certificate good for 150 shares of stock in The Economics and Mathematics Knowledge Company. They also receive dividends based on their shares. Those who answered fewer questions correctly receive fewer shares and smaller dividends. Finally, the students participate in a role play to learn more about stocks.
- Lesson 5 - Reading the Financial Pages: In Print and Online
The students learn how to read and understand information presented in the financial pages of newspapers and online sources. Working in pairs, they examine entries for stocks, mutual funds and corporate bonds. They participate in a scavenger hunt for financial information, using a local newspaper. They learn how to follow stocks online.
- Lesson 6 - What Is a Bond?
In this lesson the students learn what bonds are and how bonds work. They learn the basic terminology related to bonds and participate in a simulation activity aimed at showing that bonds are certificates of indebtedness, similar to an IOU note. Finally, the students explore credit ratings and calculate average coupon rates for various bond ratings in order to determine the relationship between ratings and bond coupons.
- Lesson 7 - What Are Mutual Funds?
The students form class investment clubs that work much in the way mutual funds do. They invest $3,000 (300 shares at $10 a share) in up to six stocks. One year later they revalue their shares and determine whether a share in their class investment clubs has increased or decreased in value. Finally, they read about mutual funds and learn that the concept behind mutual funds is similar to the concept behind their class investment clubs.
- Lesson 8 - How to Buy and Sell Stocks and Bonds
In this lesson the students learn about the financial markets in which stocks and bonds are bought and sold. They read about the high transaction costs that individual investors would experience if there were no financial markets. They perform a play that illustrates how an individual stock transaction is made in an organized financial market. Finally, the students discuss the options available for buying and selling stocks and bonds.
- Lesson 9 - What Is a Stock Market?
The students are introduced to the key characteristics of a market economy through a brief simulation and a discussion of several examples drawn from their own experiences. Then they learn about differences among the three major stock markets in the United States and place sample stocks in each of the three markets using this knowledge.
- Lesson 10 - The Language of Financial Markets
The students work in small groups to make flash cards to display terms commonly used in financial markets. The terms are grouped in five categories: Buying and Selling in the Market; Exchanges and Indexes; People in Financial Markets; Stocks, Bonds and Mutual Funds; Technical Terms. Each group of students begins by learning the terms in one category. Then the students pass their flash cards from group to group until everyone has had an opportunity to learn all the terms. The lesson concludes with a Language of Financial Markets Bee.
- Lesson 11 - Financial Institutions in the U.S. Economy
The students participate in a brief trading activity to illustrate the role institutions play in bringing savers and borrowers together, thus channeling savings to investment. The students discuss financial institutions, such as banks and credit unions, and they participate in a simulation activity to help them understand primary and secondary stock markets and bond markets.
- Lesson 12 - Building Wealth over the Long Term
The students are introduced to the case of Charlayne, a woman who becomes, accidentally, a millionaire. Charlayne's success, the students learn, was unexpected, but not a miracle. It can be explained by three widely understood rules for building wealth over the long term: saving early, buying and holding, and diversifying. The lesson uses Charlayne's decisions to illustrate each of these rules. It also addresses the risks and rewards associated with different forms of saving and investing.
- Lesson 13 - Researching Companies
The students apply an economic way of thinking to gathering information regarding securities. They learn that the cost of acquiring information must be compared to the anticipated benefit the information will provide. The students discuss the example of LeBron James and recognize that there is intense competition to find information about companies. They select companies to research by participating in a classroom drawing and by listing companies they know. They gather fundamental information about each of the companies they select.
- Lesson 14 - Credit: Your Best Friend or Your Worst Enemy?
The students do an exercise that shows how credit can be their worst enemy. They learn how quickly credit card balances can grow and how long it can take to pay off a credit-card debt. They also learn that credit can be their best friend. Working in small groups, they consider seven scenarios and decide in each case whether it would be wise for the people involved to use credit. They discuss their conclusions and develop a list of criteria suitable for use in making decisions about credit.
- Lesson 15 - Why Don't People Save?
The students examine risk-oriented behavior, considering why people often engage in behavior that is dangerous or unhealthy. They are introduced to the concept of cost/benefit analysis and asked to apply what they learn to questions about saving. They generate lists of savings goals and categorize those goals as short-term, medium-term and long-term. They learn why long-term goals are more difficult to achieve than short-term goals.
- Lesson 16 - What We've Learned
This lesson features a game in which the students review key vocabulary words and concepts presented in earlier lessons. The game is called Flyswatter Review. The teacher divides the class into two teams. Using transparencies, the teacher projects financial terms from the visuals onto a screen or wall. The teams compete to select the correct definition.
- Lesson 17 - How Financial Institutions Help Businesses Grow
The students read two case studies about the financing of businesses, contrasting modern approaches with approaches from the 1870s. Using transparencies, the students discuss the advantages of the corporate form of business organization over sole proprietorships and partnerships, while also discussing modern financial institutions that help American business firms grow.
- Lesson 18 - How Are Stock Prices Determined?
The students participate in a stock market simulation that shows how the price of a share of stock is determined in a competitive market. Then they analyze what happened in the simulation to learn how stock prices are discovered through supply and demand and not conspiratorially set by authorities.
- Lesson 19 - The Role of Government in Financial Markets
The students read background information about why the U.S. government has become involved in the regulation of financial markets. Then they work in small groups on five hypothetical cases illustrating common violations in the financial industry as reported by the Securities and Exchange Commission.
- Lesson 20 - The Stock Market and the Economy: Can You Forecast the Future?
The students study a graph that illustrates the phases of a typical business cycle. They also examine how stock prices affect overall consumption and investment in the economy. After studying The Conference Board's 10 leading economic indicators, the students try their hand at economic forecasting and compare their forecasts to what actually happened.
- Lesson 21 - Lessons from History: Stock Market Crashes
The students analyze information about the stock market crash of 1929 and the stock market crash of 1987. They use the information to make posters about the crashes, highlighting what happened during and after the crashes, causes of the crashes and the role of the Federal Reserve in each crash. After presenting their posters to the class, the students discuss similarities and differences between the two events and the likelihood of future stock market crashes.
- Lesson 22 - Investing Internationally: Currency Value Changes
The students examine the costs and benefits of international investing. They study the case of an investor named Lizzy who buys shares in a European mutual fund. Lizzy is surprised to learn that, while her international investment earned an excellent return in euros, she still lost money. How could that happen? Lizzie learns that the answer lies in understanding how changes in international currency exchange rates can influence the value of foreign investments. After studying Lizzie's case, the students apply their understanding to three additional cases involving international investing.
- Lesson 23 - Investing Involves Decision Making
This lesson provides an overall review and an opportunity for students to apply many of the concepts stressed in earlier lessons. The students examine different sorts of risk that come with investments. They are introduced to a five-step decision-making model. After practicing with it, they apply the model in a simulation activity in which they act as financial advisors, offering financial advice in four cases.
This document contains the publication's glossary. | http://www.councilforeconed.org/lesson-resources/lessons/publications/publication_info.php?t=t&pid=1-56183-570-2 | 13 |
30 | Elementary Human Genetics
The Central Asian Gene Pool
The Karakalpak Gene Pool
Discussion and Conclusions
Elementary Human Genetics
Every human is defined by his or her library of genetic material, copies of which are stored in every cell of the
body apart from the red blood cells. Cells are classified as somatic, meaning body cells, or gametic, the cells
involved in reproduction, namely the sperm and the egg or ovum. The overwhelming majority of human genetic material
is located within the small nucleus at the heart of each somatic cell. It is commonly referred to as the human genome.
Within the nucleus it is distributed between 46 separate chromosomes, two of which are known as the sex chromosomes.
The latter occur in two forms, designated X and Y. Chromosomes are generally arranged in pairs - a female has 22 pairs
of autosome chromosomes plus one pair of X chromosomes, while a male has a similar arrangement apart from
having a mixed pair of X and Y sex chromosomes.
A neutron crystallography cross-sectional image of a chromosome, showing the double
strand of DNA wound around a protein core.
Image courtesy of the US Department of Energy Genomics Program
A single chromosome consists of just one DNA macromolecule composed of two separate DNA strands, each of which contains
a different but complementary sequence of four different nucleotide bases - adenine (A), thymine (T), cytosine (C), and
guanine (G). The two strands are aligned in the form of a double helix held together by hydrogen bonds, adenine always
linking with thymine and cytosine always linking with guanine. Each such linkage between strands is known as a base pair.
The total human genome contains about 3 billion such base pairs. As such it is an incredibly long molecule that could be
from 3 cm to 6 cm long were it possible to straighten it. In reality the double helix is coiled around a core of structural
proteins and this is then supercoiled to create the chromosome, 23 pairs of which reside within a cell nucleus with a
diameter of just 0.0005 cm.
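Because pairing is strictly complementary, either strand of the helix can be reconstructed from the other. The following minimal sketch (Python; the short sequence is invented purely for illustration) derives the second strand from the first:

    # Watson-Crick pairing: A<->T, C<->G. Given one strand (read 5'->3'),
    # the complementary strand is obtained by swapping bases and reversing
    # the order, because the two strands run antiparallel.
    PAIR = {'A': 'T', 'T': 'A', 'C': 'G', 'G': 'C'}

    def reverse_complement(strand):
        return ''.join(PAIR[base] for base in reversed(strand))

    strand = 'ATGCC'                      # invented 5-base example
    print(reverse_complement(strand))     # prints GGCAT
    print(len(strand), 'base pairs in this stretch of double helix')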
A gene is a segment of the DNA nucleotide sequence within the chromosome that can be chemically read to make one specific
protein. Each gene is located at a certain point along the DNA strand, known as its locus. The 22 autosome chromosome pairs
vary in size from 263 million base pairs in chromosome 1 (the longest) down to about 47 million base pairs in chromosome 21
(the shortest - chromosome 22 is the second shortest with 50 million base pairs), equivalent to from 3,000 down to 300 genes.
The two sex chromosomes are also very different, X having about 140 million base pairs and expressing 1,100 genes, Y having only
23 million base pairs and expressing a mere 78 genes. The total number of genes in the human genome is around 30,000.
A complete set of 23 human homologous chromosome pairs
Image courtesy of the National Human Genome Research Institute, Maryland
Each specific pair of chromosomes have their own distinct characteristics and can be identified under the microscope after
staining with a dye and observing the resulting banding. With one exception the chromosome pairs are called homologous because
they have the same length and the same sequence of genes. For example the 9th pair always contain the genes for melanin
production and for ABO blood type, while the 14th pair has two genes critical to the body's immune response. Even so the
individual chromosomes within each matching pair are not identical since each one is inherited from each parent. A certain
gene at a particular locus in one chromosome may differ from the corresponding gene in the other chromosome, one being dominant
and the other recessive. The one exception relates to the male sex chromosomes, a combination of X and Y, which are not the same
length and are therefore not homologous.
A set of male human chromosomes showing typical banding
Various forms of the same gene (or of some other DNA sequence within the chromosome) are known as alleles. Differences in DNA
sequences at a specific chromosome locus are known as genetic polymorphisms. They can be categorized into various types, the
most simple being the difference in just a single nucleotide - a single nucleotide polymorphism.
When a normal somatic cell divides and replicates, the 23 homologous chromosome pairs (the genome) are duplicated through a
complex process known as mitosis. The two strands of DNA within each chromosome unravel and unzip themselves in order to
replicate, eventually producing a pair of sister chromatids - two brand new copies of the original single chromosome joined together.
However because the two chromosomes within each homologous pair are slightly different (one being inherited from each parent)
the two sister chromatids are divided in two. The two halves of each sister chromatid are allocated to each daughter cell,
thus replicating the original homologous chromosome pair. Such cells are called diploid because they contain two (slightly different)
sets of genetic information.
The production of gametic cells involves a quite different process. Sperm and eggs are called haploid cells, meaning single,
because they contain only one set of genetic information - 22 single unpaired chromosomes and one sex chromosome. They are formed
through another complex process known as meiosis. It involves a deliberate reshuffling of the parental genome in order
to increase the genetic diversity within the resulting sperm or egg cells and consequently among any resulting offspring.
As before each chromosome pair is replicated in the form of a pair of sister chromatids. This time however, each half of each
chromatid embraces its opposite neighbour in a process called synapsis. An average of two or three segments of maternal and
paternal DNA are randomly exchanged between chromatids by means of molecular rearrangements called crossover and genetic recombination.
The new chromatid halves are not paired with their matching partners but are all separated to create four separate haploid cells,
each containing one copy of the full set of 23 chromosomes, and each having its own unique random mix of maternal and paternal DNA.
In the male adult this process forms four separate sperm cells, but in the female only one of the four cells becomes an ovum, the
other three forming small polar bodies that progressively decay.
During fertilization the two haploid cells - the sperm and the ovum or egg - interact to form a diploid zygote (zyg meaning
symmetrically arranged in pairs). In fact the only contribution that the sperm makes to the zygote is its haploid nucleus containing
its set of 23 chromosomes. The sex of the offspring is determined by the sex chromosome within the sperm, which can be either
X (female) or Y (male). Clearly the sex chromosome within the ovum has to be X. The X and the Y chromosomes are very different,
the Y being only one third the size of the X. During meiosis in the male, the X chromosome recombines and exchanges DNA with
the Y only at its ends. Most of the Y chromosome is therefore unaffected by crossover and recombination. This section is known
as the non-recombining part of the Y chromosome and it is passed down the male line from father to son relatively unchanged.
Scanning electron micrograph of an X and Y chromosome
Image courtesy of Indigo Instruments, Canada
Not all of the material within the human cell resides inside the nucleus. Both egg and sperm cells contain small energy-producing
organelles within the cytoplasm called mitochondria that have their own genetic material for making several essential mitochondrial
proteins. However the DNA content is tiny in comparison with that in the cell nucleus - it consists of several rings of DNA totalling
about 16,500 base pairs, equivalent to just 13 genes. The genetic material in the nucleus is about 300,000 times larger. When
additional mitochondria are produced inside the cell, the mitochondrial DNA is replicated and copies are transferred to the
new mitochondria. The reason why mitochondrial DNA, mtDNA for short, is important is because during fertilization virtually no
mitochondria from the male cell enters the egg and those that do are tagged and destroyed. Consequently the offspring only inherit
the female mitochondria. mtDNA is therefore inherited through the female line.
Population genetics is a branch of mathematics that attempts to link changes in the overall history of a population to changes in
its genetic structure, a population being a group of interbreeding individuals of the same species sharing a common geographical
area. By analysing the nature and diversity of DNA within and between different populations we can gain insights into their
separate evolution and the extent to which they are or are not related to each other. We can gain insights into a population's
level of reproductive isolation, the minimum time since it was founded, how marriage partners were selected, past geographical
expansions, migrations, and mixings.
The science is based upon the property of the DNA molecule to occasionally randomly mutate during replication, creating the possibility
that the sequence of nucleotides in the DNA of one generation may differ slightly in the following generation. The consequence of
this is that individuals within a homogenous population will in time develop different DNA sequences, the characteristic that we
have already identified as genetic polymorphism. Because mutations are random, two identical but isolated populations will tend
to change in different directions over time. This property is known as random genetic drift and its effect is greater in smaller populations.
To study genetic polymorphisms, geneticists look for specific genetic markers. These are clearly recognizable mutations in the
DNA whose frequency of incidence varies widely across populations from different geographical areas. In reality the vast majority
of human genetic sequences are identical, only around 0.1% of them being affected by polymorphisms.
There are several types of genetic marker. The simplest are single nucleotide polymorphisms (SNPs), mentioned above, where just
one nucleotide has been replaced with another (for example A replaces T or C replaces G). SNPs in combination along a stretch of DNA
are called haplotypes, shorthand for haploid genotypes. These have turned out to be valuable markers because they are genetically
relatively stable and are found at differing frequencies in many populations. Some are obviously evolutionarily related to each
other and can be classified into haplogroups (Hg). Another type of polymorphism is where short strands of DNA have been randomly
inserted into the genetic DNA. This results in so-called biallelic polymorphism, since the strand is either present or absent. These
are useful markers because the individuals that have the mutant insert can be traced back to a single common ancestor, while those
who do not have the insert represent the original ancestral state. Biallelic polymorphisms can be assigned to certain haplotypes.
A final type of marker is based upon microsatellites, very short sequences of nucleotides, such as GATA, that are repeated in tandem
numerous times. A polymorphism occurs if the number of repetitions increases or decreases. Microsatellite polymorphisms, sometimes
also called short-tandem-repeat polymorphisms, occur more frequently over time, providing a different tool to study the rate of
genetic change against time.
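As an illustration of how a microsatellite marker is typed, the toy sketch below (Python; both sequences are invented) simply counts the tandem GATA repeats at a locus - two chromosomes with different counts carry different alleles of the same short-tandem-repeat marker:

    # Count the longest run of tandem GATA repeats in a DNA sequence.
    def longest_repeat_run(sequence, motif='GATA'):
        best = run = 0
        i = 0
        while i + len(motif) <= len(sequence):
            if sequence[i:i + len(motif)] == motif:
                run += 1
                best = max(best, run)
                i += len(motif)
            else:
                run = 0
                i += 1
        return best

    allele_1 = 'CCGATAGATAGATATTA'        # 3 tandem GATA repeats
    allele_2 = 'CCGATAGATAGATAGATATTA'    # 4 tandem GATA repeats
    print(longest_repeat_run(allele_1), longest_repeat_run(allele_2))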
Of course the whole purpose of sexual reproduction is to deliberately scramble the DNA from both parents in order to create
a brand new set of chromosome pairs for their offspring that are not just copies of the parental chromosomes. Studies show
that about 85% of genetic variation in autosomal sequences occurs within rather than between populations.
However it is the genetic variation between populations that is of the greatest interest when we wish to study their history.
Because of this, population geneticists look for more stable pieces of DNA that are not disrupted by reproduction. These are
of two radically different types, namely the non-recombining part of the Y chromosome and the mitochondrial DNA or mtDNA. A
much higher 40% of the variations in the Y chromosome and 30% of the variations in mtDNA are found between populations. Each
provides a different perspective on the genetic evolution of a particular population.
Y Chromosome Polymorphisms
By definition the Y chromosome is only carried by the male line. Although smaller than the other chromosomes, the Y chromosome
is still enormous compared to the mtDNA. The reason that it carries so few genes is because most of it is composed of "junk" DNA.
As such it is relatively unaffected by natural selection. The non-recombining part of the Y chromosome is passed on from father
to son with little change apart from the introduction of genetic polymorphisms as a result of random mutations. The only
problem with using the Y chromosome to study inheritance has been the practical difficulty of identifying a wide range
of polymorphisms within it, although the application of special HPLC techniques has overcome some of this limitation in recent years.
Y chromosome polymorphisms seem to be more affected by genetic drift and may give a better resolution between closely related
populations where the time since their point of divergence has been relatively short.
By contrast the mtDNA is carried by the female line. Although less than one thousandth the size of the DNA in the non-recombinant
Y chromosome, polymorphisms are about 10 times more frequent in mtDNA than in autosome chromosomes.
Techniques and Applications
Population genetics is a highly statistical science and different numerical methods can be used to calculate the various properties of
one or several populations. Our intention here is to cover the main analytical tools used in the published literature relating to
Karakalpak and the other Central Asian populations.
The genetic diversity of a population is the diversity of DNA sequences within its gene pool. It is calculated by a statistical
method known as the analysis of molecular variance (AMOVA) in the DNA markers from that population. It is effectively a summation of the
frequencies of individual polymorphisms found within the sample, mathematically normalized so that a diversity of 0 implies all the
individuals in that population have identical DNA and a diversity of 1 implies that the DNA of every individual is different.
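For a single haploid marker such as a Y-chromosome or mtDNA haplotype, the statistic described above amounts to the familiar gene (haplotype) diversity - one minus the sum of squared haplotype frequencies, with a small-sample correction - rather than the full AMOVA machinery. A minimal sketch under that assumption (Python; the haplotype counts are invented):

    from collections import Counter

    def haplotype_diversity(haplotypes):
        # Nei's gene diversity h = n/(n-1) * (1 - sum(p_i^2)).
        # 0 means every sampled individual carries the same haplotype;
        # values near 1 mean nearly every individual is different.
        n = len(haplotypes)
        counts = Counter(haplotypes)
        sum_p_sq = sum((c / n) ** 2 for c in counts.values())
        return n / (n - 1) * (1 - sum_p_sq)

    # Invented samples: one population dominated by a single haplotype,
    # one in which haplotypes are spread evenly.
    pop_a = ['H1'] * 17 + ['H2'] * 2 + ['H3'] * 1
    pop_b = ['H1', 'H2', 'H3', 'H4', 'H5'] * 4
    print(round(haplotype_diversity(pop_a), 3))   # low diversity
    print(round(haplotype_diversity(pop_b), 3))   # high diversity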
The genetic distance between two populations is a measure of the difference in their polymorphism frequencies. It is calculated
statistically by comparing the pairwise differences between the markers identified for each population, to the pairwise
differences within each of the two populations. This distance is a multi-dimensional not a linear measure. However it is normally
illustrated graphically in two dimensions. New variables are identified by means of an angular transformation, the first two of which
together account for the greatest proportion of the differences between the populations studied.
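A rough feel for the calculation can be had from a toy example. The sketch below (Python; the five-site marker strings are invented and the statistic is a simplified Fst-style measure, not the exact method used in the published studies) contrasts the average pairwise differences between two populations with those within each of them:

    from itertools import combinations, product

    def mean_pairwise_diff(seqs_x, seqs_y=None):
        # Average number of differing positions between pairs of sequences,
        # within one sample (seqs_y omitted) or between two samples.
        if seqs_y is None:
            pairs = combinations(seqs_x, 2)
        else:
            pairs = product(seqs_x, seqs_y)
        diffs = [sum(a != b for a, b in zip(s, t)) for s, t in pairs]
        return sum(diffs) / len(diffs)

    def fst_like_distance(pop1, pop2):
        # Excess of between-population differences over the average
        # within-population differences, scaled by the between value.
        within = 0.5 * (mean_pairwise_diff(pop1) + mean_pairwise_diff(pop2))
        between = mean_pairwise_diff(pop1, pop2)
        return (between - within) / between if between else 0.0

    # Invented 5-site marker strings for two small populations.
    pop1 = ['AATGC', 'AATGC', 'AATGG', 'AACGC']
    pop2 = ['TTTGC', 'TTTGC', 'TTCGC', 'ATTGC']
    print(round(fst_like_distance(pop1, pop2), 3))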
Another property that can be measured statistically is kinship - the extent to which members of a population are related to
each other as a result of a common ancestor. Mathematically, a kinship coefficient is the probability that a randomly sampled
sequence of DNA from a randomly selected locus is identical across all members of the same population. A coefficient of 1
implies everyone in the group is related while a coefficient of 0 implies no kinship at all.
By making assumptions about the manner in which genetic mutations occur and their frequency over time it is possible to work backwards
and estimate how many generations (and therefore years) have elapsed from the most recent common ancestor, the individual to
whom all the current members of the population are related by descent. This individual is not necessarily the founder of the
population. For example if we follow the descent of the Y chromosome, this can only be passed down the male line from father to son.
If a male has no sons his non-combining Y chromosome DNA is eliminated from his population for ever more. Over time, therefore, the
Y chromosomes of the populations ancestors will be progressively lost. There may well have been ancestors older than the most recent
common ancestor, even though we can find no signs for those ancestors in the Y chromosome DNA of the current population.
A similar situation arises with mtDNA in the female half of the population because some women do not have daughters.
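As a highly simplified illustration of the dating step, the sketch below (Python) assumes a single microsatellite locus, a stepwise mutation model and an invented mutation rate, so that the average squared difference from the presumed ancestral repeat count grows in proportion to the number of elapsed generations; real studies combine many loci and, as noted above, the resulting dates carry broad confidence intervals.

    # Rough time-to-most-recent-common-ancestor estimate from microsatellite
    # repeat counts: under a stepwise mutation model the average squared
    # difference (ASD) from the ancestral repeat count grows roughly as
    # mutation_rate * generations, so t ~= ASD / mutation_rate.
    # All numbers below are invented for illustration only.
    def tmrca_generations(repeat_counts, ancestral, mutation_rate):
        asd = sum((r - ancestral) ** 2 for r in repeat_counts) / len(repeat_counts)
        return asd / mutation_rate

    sampled = [14, 14, 15, 13, 14, 16, 14, 15]   # repeat counts at one locus
    ancestral = 14                               # assume the modal count is ancestral
    mu = 0.002                                   # mutations per locus per generation (assumed)

    generations = tmrca_generations(sampled, ancestral, mu)
    print(round(generations), 'generations, roughly', round(generations * 30), 'years')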
In 1977 the American anthropologist Gordon T. Bowles published an analysis of the anthropometric characteristics of 519 different
populations from across Asia, including the Karakalpaks and two regional groups of Uzbeks. Populations were characterized by 9
standard measurements, including stature and various dimensions of the head and face. A multivariate analysis was used to separate the
different populations by their physical features.
Bowles categorized the populations across four regions of Asia (West, North, East, and South) into 19 geographical groups. He then
analysed the biological distances between the populations within each group to identify clusters of biologically similar peoples.
Central Asia was divided into Group XVII encompassing Mongolia, Singkiang, and Kazakhstan and Group XVIII encompassing Turkestan and
Tajikistan. Each Group was found to contain three population clusters:
Anthropological Cluster Analysis of Central Asia
| Group | Cluster | Regional Populations |
| XVII | 1 | Eastern Qazaqs, Alai Valley Kyrgyz |
| XVII | 2 | Aksu Rayon Uighur, Alma Ata Uighur |
| XVII | 3 | Alma Ata Qazaqs, T'ien Shan Kyrgyz |
| XVIII | - | Total Turkmen |
Within geographical Group XVIII, the Karakalpaks clustered with the Uzbeks of Tashkent and the Uzbeks of Samarkand. The members of this
first cluster were much more heterogeneous than the other two clusters of neighbouring peoples. Conversely the Turkmen cluster had the
lowest variance of any of the clusters in the North Asia region, showing that different Turkmen populations are closely related.
The results of this study were re-presented by Cavalli-Sforza in a more readily understandable graphical form. The coordinates used are
artificial mathematical transformations of the original 9 morphological measurements, designed to identify the distances between different
populations in a simple two-dimensional format. The first two principal coordinates identify a clear division between the Uzbek/Karakalpaks,
and the Turkmen and Iranians, but show similarities between the Uzbek/Karakalpaks and the Tajiks, and also with the western Siberians.
Though not so close there are some similarities between the Uzbek/Karakalpaks and the Qazaqs, Kyrgyz, and Mongols:
Physical Anthropology of Asia redrawn by David Richardson after Bowles 1977
First and Second Principal Coordinates
The second and third principal coordinates maintains the similarity between Uzbek/Karakalpaks and Tajiks but emphasizes the more
eastern features of the Qazaqs, Kyrgyz, and Mongols:
Physical Anthropology of Asia redrawn by David Richardson after Bowles 1977
Second and Third Principal Coordinates
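Plots of this kind can be generated from raw measurements with a standard principal component analysis. The sketch below (Python with numpy; the small measurement matrix is invented and merely stands in for Bowles' nine standardized measurements) shows the mechanics of projecting each population onto the first two coordinates:

    import numpy as np

    # Each row is one population, each column one body measurement
    # (stature, head length, head breadth, ...). The 4 x 5 matrix below
    # is invented purely to show the mechanics.
    measurements = np.array([
        [165.0, 186.0, 154.0, 140.0, 55.0],
        [167.0, 184.0, 156.0, 142.0, 54.0],
        [170.0, 190.0, 150.0, 137.0, 51.0],
        [164.0, 188.0, 158.0, 144.0, 56.0],
    ])

    # Standardize each measurement, then project onto the directions of
    # greatest variance (the principal components).
    z = (measurements - measurements.mean(axis=0)) / measurements.std(axis=0)
    covariance = np.cov(z, rowvar=False)
    eigenvalues, eigenvectors = np.linalg.eigh(covariance)
    order = np.argsort(eigenvalues)[::-1]          # largest variance first
    first_two = eigenvectors[:, order[:2]]
    coordinates = z @ first_two                    # one (x, y) point per population
    print(coordinates)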
The basic average morphology of the Uzbeks and Karakalpaks shows them to be of medium stature, with heads that have an average length but
an above average breadth compared to the other populations of Asia. Their faces are broad and are of maximum height. Their noses
are of average width but have the maximum length found in Asia.
Qazaqs have the same stature but have longer and broader heads. Their faces are shorter but broader, having the maximum breadth found
in Asia, while their noses too are shorter and slightly broader.
Some of these differences in features were noted by some of the early Russian visitors, such as N. N. Karazin, who observed the differences
between the Karakalpaks and the Qazaqs (who at that time were called Kirghiz) when he first entered the northern Aral delta:
"In terms of type, the Karakalpak people themselves differ noticeably from the Kirghizs: flattened Mongolian noses are already a rarity here,
cheek-bones do not stand out so, beards and eyebrows are considerably thicker - there is a noticeably strong predominance of the Turkish race."
The Central Asian Gene Pool
Western researchers tended to under-represent Central Asian populations in many of the earlier studies of population genetics.
Cavalli-Sforza, Menozzi, and Piazza, 1994
In 1994 Cavalli-Sforza and two of his colleagues published a landmark study of the worldwide geographic distribution of
human genes. In order to make global comparisons the study was forced to rely upon the most commonly available genetic
markers, and analysed classical polymorphisms based on blood groups, plasma proteins, and red cell enzymes. Sadly no information
was included for Karakalpaks or Qazaqs.
Results were analysed continent by continent. The results for the different populations of Asia grouped the Uzbeks, Turkmen,
and western Turks into a central cluster, located on the borderline between the Caucasian populations of the west and south
and the populations of Northeast Asia and East Asia:
Principal Component Analysis of Asian Populations
Redrawn by David Richardson after Cavalli-Sforza et al, 1994
Comas, Calafell, Pérez-Lezaun et al, 1998
In 1993-94 another Italian team collected DNA samples from four different populations close to the Altai: Qazaq highlanders
living close to Almaty, Uighur lowlanders in the same region, and two Kyrgyz communities - one in the southern highlands,
the other in the northern lowlands of Kyrgyzstan.
The data was used in two studies, both published in 1998. In the first, by Comas et al, mtDNA polymorphisms in these
four communities were compared with other Eurasian populations in the west (Europe, Middle East, and Turkey), centre (the Altai)
and the east (Mongolia, China, and Korea). The four Central Asian populations all showed high levels of sequence diversity -
in some cases the highest in Eurasia. At the same time they were tightly clustered together, almost exactly halfway between
the western and the eastern populations, the exception being that the Mongolians occupied a position close to this central
cluster. The results suggested that the Central Asian gene pool was an admixture of the western and eastern gene pools, formed
after the western and eastern Eurasians had diverged. The authors suggested that this diversity had possibly been enhanced by
human interaction along the Silk Road.
In the second, by Pérez-Lezaun et al, short-tandem-repeat polymorphisms in the Y chromosome were analysed for the
four Central Asian populations alone. Each of the four was found to be highly heterogeneous yet very different from the other
three, the latter finding appearing to contradict the mtDNA results. However the two highland groups had less genetic diversity
because each had very high frequencies for one specific polymorphism:
Y chromosome haplotype frequencies, with labels given to those shared by more than one population
From Pérez-Lezaun et al, 1998.
The researchers resolved the apparent contradiction between the two studies in terms of different migration patterns for men and women.
All four groups practised a combination of exogamy and patrilocal marriage - in other words couples within the same clan could not marry
and brides always moved from their own village to the village of the groom. Consequently the males, and their genes, were isolated and
localized, while the females were mobile and there were more similarities in their genes. The high incidence of a single marker in each
highland community was presumed to be a founder effect, supported by evidence that the highland Qazaq community had only been
established by lowland Qazaqs a few hundred years ago.
Zerjal, Spencer Wells, Yuldasheva, Ruzibakiev, and Tyler-Smith, 2002
In 2002 a joint Oxford University/Imperial Cancer Research Fund study was published, analysing Y chromosome polymorphisms in 15 different
Central Asian populations, from the Caucasus to Mongolia. It included Uzbeks from the eastern viloyat of Kashkadarya, Qazaqs
and Uighurs from eastern Kazakhstan, Tajiks, and Kyrgyz. Blood samples had been taken from 408 men, living mainly in villages, between 1993
and 1995. In the laboratory the Y chromosomes were initially typed with binary markers to identify 13 haplogroups. Following this,
microsatellite variations were typed in order to define more detailed haplotypes.
Haplogroup frequencies were calculated for each population and were illustrated by means of the following chart:
Haplogroup frequencies across Central Asia
From Zerjal et al, 2002.
Many of the same haplogroups occurred across the 5,000 km expanse of Central Asia, although with large variations in frequency and with
no obvious overall pattern. Haplogroups 1, 2, 3, 9, and 26 accounted for about 70% of the total sample.
Haplogroups (Hg) 1 and 3 were common in almost all populations, but the highest frequencies of Hg1 were found in Turkmen and Armenians,
while the highest frequencies of Hg3 were found in Kyrgyz and Tajiks. Hg3 was more frequent in the eastern populations, but was only
present at 3% in the Qazaqs. Hg3 is the equivalent of M17, which seems to originate from Russia and the Ukraine, a region not covered
by this survey - see Spencer Wells et al, 2001 below. Hg9 was very frequent in the Middle East and declined in importance across
Central Asia from west to east. However some eastern populations had a higher frequency - the Uzbeks, Uighurs, and Dungans.
Hg10 and its derivative Hg36 showed the opposite pattern, together accounting for 54% of haplogroups for the Mongolians and 73% for
the Qazaqs. Hg26, which is most frequently found in Southeast Asia, occurs with the highest frequencies among the Dungans (26%),
Uighurs (15%), Mongolians (13%), and Qazaqs (13%) in eastern Central Asia. Hg12 and Hg16 are widespread in Siberia and northern Eurasia
but are rare in Central Asia except for the Turkmen and Mongolians. Hg21 was restricted to the Caucasus region.
The most obvious observation is that virtually every population is quite distinct. As an example, the Uzbeks are quite different from the
Turkmen, Qazaqs, or Mongolians. Only two populations, the Kyrgyz from central Kyrgyzstan and the Tajiks from Pendjikent, show any close affinity to each other.
The researchers measured the genetic diversity of each population using both haplogroup and microsatellite frequencies. Within Central
Asia, the Uzbeks, Uighurs, Dungans, and Mongolians exhibited high genetic diversity, while the Qazaqs, Kyrgyz, Tajiks, and Turkmen
showed low genetic diversity. These differences were explored by examining the haplotype variation within each haplogroup for each
population. Among the Uzbeks, for example, many different haplotypes are widely dispersed across all chromosomes. Among the Qazaqs,
however, the majority of the haplotypes are clustered together and many chromosomes share the same or related haplotypes.
Low diversity coupled with high frequencies of population-specific haplotype clusters are typical of populations that have experienced
a bottleneck or a founder event. The most recent common ancestor of the Tajik population was estimated to date from the early part of
the 1st millennium AD, while the most recent common ancestors of the Qazaq and Kyrgyz populations were placed in the period 1200 to
1500 AD. The authors suggested that bottlenecks might be a feature of societies like the Qazaqs and Kyrgyz with small, widely dispersed
nomadic groups, especially if they had suffered massacres during the Mongol invasion. Of course these calculations have broad confidence
intervals and must be interpreted with caution.
Microsatellite haplotype frequencies were used to investigate the genetic distances among the separate populations. The best
two-dimensional fit produces a picture with no signs of general clustering on the basis of either geography or linguistics:
Genetic distances based on microsatellite haplotypes
From Zerjal et al, 2002.
The Kyrgyz (ethnically Turkic) do cluster next to the Tajiks (supposedly of Indo-Iranian origin), but both are well separated from the
neighbouring Qazaqs. The Turkmen, Qazaqs, and Georgians tend to be isolated from the other groups, leaving the Uzbeks in a somewhat
central position, clustered with the Uighurs and Dungans.
The authors attempted to interpret the results of their study in terms of the known history of the region. The apparent underlying
gradation in haplogroup frequencies from west to east was put down to the eastward agricultural expansion out of the Middle East
during the Neolithic, some of the haplogroup markers involved being more recent than the Palaeolithic. Meanwhile Hg3 (equivalent to M17 and
Eu19), which is widespread in Central Asia, was attributed to the migration of the pastoral Indo-Iranian "kurgan culture" eastwards from
the Ukraine in the late 3rd/early 2nd millennium BC. The mountainous Caucasus region seems to have been bypassed by this migration, which
seems to have extended across Central Asia as far as the borders of Siberia and China.
Later events also appear to have left their mark. The presence of a high number of low-frequency haplotypes in Central Asian populations
was associated with the spread of Middle Eastern genes, either through merchants associated with the early Silk Route or the later spread
of Islam. Uighurs and Dungans show a relatively high Middle Eastern admixture, including higher frequencies of Hg9, which might indicate
their ancestors migrated from the Middle East to China before moving into Central Asia.
High frequencies of Hg10 and its derivative Hg36 are found in the majority of Altaic-speaking populations, especially the Qazaqs, but
also the Uzbeks and Kyrgyz. Yet its contribution west of Uzbekistan is low or undetectable. This feature is associated with the
progressive migrations of nomadic groups from the east, from the Hsiung-Nu, to the Huns, the Turks, and the Mongols. Of course Central
Asians have not only absorbed immigrants from elsewhere but have undergone expansions, colonizations and migrations of their own,
contributing their DNA to surrounding populations. Hg1, the equivalent of M45 and its derivative markers, is believed to have originated
in Central Asia and is found throughout the Caucasus and in Mongolia.
The Karakalpak Gene Pool
Spencer Wells et al, 2001
The first examination of Karakalpak DNA appeared as part of a widespread study of Eurasian Y chromosome diversity published by
Spencer Wells et al in 2001. It included samples from 49 different Eurasian groups, ranging from western Europe, Russia,
the Middle East, the Caucasus, Central Asia, South India, Siberia, and East Asia. Data on 12 other groups was taken from the literature.
In addition to the Karakalpaks, the Central Asian category included seven separate Uzbek populations selected from Ferghana to Khorezm,
along with Turkmen from Ashgabat, Tajiks from Samarkand, and Qazaqs and Uighurs from Almaty. The study used biallelic markers that were
then assigned to 23 different haplotypes. To illustrate the results the latter were condensed into 7 evolutionary-related groups.
The study found that the Uzbek, Karakalpak, and Tajik populations had the highest haplotype diversity in Eurasia, the Karakalpaks having
the third highest diversity of all 49 groups. The Qazaqs and Kyrgyz had a significantly lower diversity.
This diversity is obvious from the chart comparing haplotype frequencies across Eurasia:
Distribution of Y chromosome haplotype lineages across various Eurasian populations
From Spencer Wells et al, 2001.
Uzbeks have a fairly balanced haplotype profile, while populations in the extreme west and east are dominated by one specific haplotype
lineage - the M173 lineage in the extreme west and the M9 lineage in the extreme east and Siberia.
The Karakalpaks are remarkably similar to the Uzbeks:
Distribution of Y chromosome haplotype lineages in Uzbeks and Karakalpaks
From Spencer Wells et al, 2001.
the main differences being that Karakalpaks have a higher frequency of M9 and M130 and a lower frequency of M17 and M89 haplotype
lineages. M9 is strongly linked to Chinese and other far-eastern peoples, while M130 is associated with Mongolians and Qazaqs.
On the other hand, M17 is strong in Russia, the Ukraine, the Czech and Slovak Republics as well as in Kyrgyz populations, while M89
has a higher frequency in the west. It seems that compared to Uzbeks, the Karakalpak gene pool has a somewhat higher frequency of
haplotypes that are associated with eastern as opposed to western Eurasian populations.
In fact the differences between Karakalpaks and Uzbeks are no more pronounced than between the Uzbeks themselves. Haplotype frequencies
for the Karakalpaks tend to be within the ranges measured across the different Uzbek populations:
Comparison of Karakalpak haplotype lineage frequencies to other ethnic groups in Central Asia
[Table: Karakalpak frequencies of the M130, M89, M9, M45, M173, and M17 haplotype lineages compared with those of other Central Asian groups; only a row of percentage ranges (0-7, 7-18, 19-34, 5-21, and 4-11) survives in the source.]
Statistically Karakalpaks are genetically closest to the Uzbeks from Ferghana, followed by those from Surkhandarya, Samarkand, and
finally Khorezm. They are furthest from the Uzbeks of Bukhara, Tashkent, and Kashkadarya.
These results also show the distance between the Karakalpaks and the other peoples of Central Asia and its neighbouring regions.
Next to the Uzbeks, the Karakalpaks are genetically closest to the Tatars and Uighurs. However they are quite distant from the Turkmen,
Qazaqs, Kyrgyz, Siberians, and Iranians.
The researchers produced a "neighbour-joining" tree, which clustered the studied populations into eight categories according to the
genetic distances between them. The Karakalpaks were classified into cluster VIII along with Uzbeks, Tatars, and Uighurs - the
populations with the highest genetic diversity. They appear sandwiched between the peoples of Russia and the Ukraine and the
Mongolians and Qazaqs.
Neighbour-joining tree of 61 Eurasian Populations
Karakalpaks are included in cluster VIII along with Uzbeks, Tatars, and Uighurs
From Spencer Wells et al, 2001.
Spencer Wells and his colleagues did not attempt to explain why the Karakalpak gene pool is similar to Uzbek but is different from the
Qazaq, a surprising finding given that the Karakalpaks lived in the same region as the Qazaqs of the Lesser Horde before migrating
into Khorezm. Instead they suggested that the high diversity in Central Asia might indicate that its population is among the oldest
in Eurasia. M45 is the ancestor of haplotype M173, the predominant group found in Western Europe, and is thought to have arisen in
Central Asia about 40,000 years ago. M173 arose about 30,000 years ago, just as modern humans began their migration from Central
Asia into Europe during the Upper Palaeolithic. M17 (also known as the Eu19 lineage) has its origins in eastern Europe and the Ukraine
and may have been initially introduced into Central Asia following the last Ice Age and re-introduced later by the south-eastern migration
of the Indo-Iranian "kurgan" culture.
Comas et al, 2004
At the beginning of 2004 a complementary study was published by David Comas, based on the analysis of mtDNA haplogroups from 12 Central
Asian and neighbouring populations, including Karakalpaks, Uzbeks, and Qazaqs. Sample sizes were only 20 per population, dropping to 16 for the Dungans and
Uighurs, so errors in the results for individual populations could be high.
The study reconfirmed the high genetic diversity within Central Asian populations. However a high proportion of sequences originated elsewhere,
suggesting that the region had experienced "intense gene flow" in the past.
The haplogroups were divided into three types according to their origins: West Eurasian, East Asian, and Indian. Populations showed a
gradation from west to east, with the Karakalpaks occupying the middle ground, with half of their haplogroups having a western
origin and the other half having an eastern origin. Uzbek populations contained a small Indian component.
Mixture of western and eastern mtDNA haplogroups across Central Asia
[Table: the proportions of West Eurasian, East Asian, and Indian mtDNA haplogroups in each population; the detailed figures have not survived in the source.]
The researchers found that two of the haplogroups of East Asian origin (D4c and G2a) not only occurred at higher frequencies
in Central Asia than in neighbouring populations but appeared in many related but diverse forms. These may have originated as
founder mutations some 25,000 to 30,000 years ago, expanded as a result of genetic drift and subsequently become dispersed into
the neighbouring populations. Their incidence was highest in the Qazaqs, and second highest in the Turkmen and Karakalpaks.
The majority of the other lineages separate into two types with either a western or an eastern origin. They do not overlap,
suggesting that they were already differentiated before they came together in Central Asia. Furthermore the eastern group contains
both south-eastern and north-eastern components. One explanation for their admixture in Central Asia is that the region was originally
inhabited by Western people, who were then partially replaced by the arrival of Eastern people. There is genetic evidence from
archaeological sites in eastern China of a drastic shift, between 2,500 and 2,000 years ago, from a European-like population to
the present-day East Asian population.
The presence of ancient Central Asian sequences suggests it is more likely that the people of Central Asia are a mixture of two
differentiated groups of peoples who originated in west and east Eurasia respectively.
Chaix and Heyer et al, 2004
The most interesting study of Karakalpak DNA so far was published by a team of French workers in the autumn of 2004. It was based on
blood samples taken during two separate expeditions to Karakalpakstan in 2001 and 2002, organized with the assistance of IFEAC, the
Institut Français d'Etudes sur l'Asie Centrale, based in Tashkent. The samples consisted of males belonging to five different ethnic
groups: Qon'ırat Karakalpaks (sample size 53), On To'rt Urıw Karakalpaks (53), Qazaqs (50), Khorezmian Uzbeks (40), and Turkmen (51).
The study was based on the analysis of Y chromosome haplotypes from DNA extracted from white blood cells. In addition to providing
samples for DNA analysis, participants were also interviewed to gather information on their paternal lineages and their tribal and clan affiliations.
Unfortunately the published results only focused on the genetic relationships between the tribes, clans and lineages of these five
ethnic groups. However before reviewing these important findings it is worth looking at the more general aspects that emerged from
the five samples. These were summarized by Professor Evelyne Heyer and Dr R Chaix at a workshop on languages and genes held in France
in 2005, where the results from Karakalpakstan were compared with the results from similar expeditions to Kyrgyzstan, the Bukhara,
Samarkand, and Ferghana Valley regions of Uzbekistan, and Tajikistan as well as with some results published by other research teams.
In some cases comparisons were limited by the fact that the genetic analysis of samples from different regions was not always done
according to the same protocols.
The first outcome was the reconfirmation of the high genetic diversity among Karakalpaks and Uzbeks:
Y Chromosome Diversity across Central Asia
|Population||Region||Sample Size|| Diversity |
|Karakalpak On To'rt Urıw||Karakalpakstan||54||0.89|
|Tajik Kamangaron||Ferghana Valley||30||0.98|
|Tajik Richtan||Ferghana Valley||29||0.98|
|Kyrgyz Andijan||Uzbek Ferghana Valley||46||0.82|
|Kyrgyz Jankatalab||Uzbek Ferghana Valley||20||0.78|
|Kyrgyz Doboloo||Uzbek Ferghana Valley||22||0.70|
The high diversities found in Uighur and Tajik communities also agreed with earlier findings. Qon'ırat Karakalpaks had somewhat
greater genetic diversity than On To'rt Urıw Karakalpaks. Some of these figures are extremely high. A diversity of zero implies
a population where every individual is identical. A diversity of one implies the opposite, with the haplotypes of every individual being different.
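For readers who want to see how such a diversity figure is produced, the following minimal sketch computes Nei's gene (haplotype) diversity for an invented sample. It illustrates the statistic described above; it is not the researchers' own code.

```python
from collections import Counter

def haplotype_diversity(haplotypes):
    """Nei's unbiased gene diversity: H = n/(n-1) * (1 - sum of squared frequencies)."""
    n = len(haplotypes)
    counts = Counter(haplotypes)
    freq_sq = sum((c / n) ** 2 for c in counts.values())
    return n / (n - 1) * (1 - freq_sq)

# An invented sample of haplotype labels: H is 0 if all are identical
# and reaches 1 when every haplotype in the sample is different.
sample = ["H1", "H1", "H1", "H2", "H3", "H3", "H4", "H5", "H6", "H7"]
print(round(haplotype_diversity(sample), 2))   # 0.91 for this sample
```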
The second more important finding concerned the Y chromosome genetic distances among different Central Asian populations. As
usual this was presented in two dimensions:
Genetic distances between ethnic populations in Karakalpakstan and the Ferghana Valley
From Chaix and Heyer et al, 2004.
The researchers concluded that Y chromosome genetic distances were strongly correlated to geographic distances. Not only are Qon'ırat
and On To'rt Urıw populations genetically close, both are also close to the neighbouring Khorezmian Uzbeks. Together they give the
appearance of a single population that has only relatively recently fragmented into three separate groups. Clearly this situation is
mirrored with the two Tajik populations living in the Ferghana Valley and also with two of the three Kyrgyz populations from the same
region. Although close to the local Uzbeks, the two Karakalpak populations have a slight bias towards the local Qazaqs.
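The correlation between genetic and geographic distance is usually tested with a Mantel-style permutation test on the two distance matrices. The sketch below illustrates the idea with two small invented matrices; it is not the procedure reported in the paper, just a common way of making such a comparison.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented pairwise distance matrices for four populations:
# genetic distances (Fst-like values) and geographic distances in km.
genetic = np.array([[0.00, 0.02, 0.05, 0.09],
                    [0.02, 0.00, 0.04, 0.08],
                    [0.05, 0.04, 0.00, 0.03],
                    [0.09, 0.08, 0.03, 0.00]])
geographic = np.array([[0, 150, 600, 900],
                       [150, 0, 500, 800],
                       [600, 500, 0, 250],
                       [900, 800, 250, 0]], dtype=float)

def upper(m):
    i, j = np.triu_indices_from(m, k=1)
    return m[i, j]

observed = np.corrcoef(upper(genetic), upper(geographic))[0, 1]

# Build a null distribution by permuting the population labels of one matrix.
null = []
for _ in range(999):
    order = rng.permutation(len(genetic))
    permuted = genetic[np.ix_(order, order)]
    null.append(np.corrcoef(upper(permuted), upper(geographic))[0, 1])
p_value = (1 + sum(r >= observed for r in null)) / (1 + len(null))

print(f"Mantel r = {observed:.2f}, one-sided p = {p_value:.3f}")
```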
The study of the Y chromosome was repeated for the mitochondrial DNA, to provide a similar picture for the female half of the same
populations. The results were compared to other studies conducted on other groups of Central Asians. We have redrawn the chart showing
genetic distances among populations, categorizing different ethnic groups by colour to facilitate comparisons:
Genetic distances among ethnic populations in Central Asia
Based on mitochondrial DNA polymorphisms
From Heyer, 2005.
The French team concluded that, in this case, genetic distances were not related to either geographical distances or to linguistics.
However this is not entirely true because there is some general clustering among populations of the same ethnic group, although by
no means as strong as that observed from the Y chromosome data. The three Karakalpak populations highlighted in red consist of the
On To'rt Urıw (far right), the Qon'ırat (centre), and the Karakalpak sample used in the Comas 2004 study (left). The Uzbeks are shown in green
and those from Karakalpakstan are the second from the extreme left, the latter being the Uzbeks from Samarkand. A nearby group of
Uzbeks from Urgench in Khorezm viloyati appear extreme left. There is some relationship between the mtDNA of the Karakalpak
and Uzbek populations of the Aral delta therefore, but it is much weaker than the relationship between their Y chromosome DNA. On the
other hand the Qazaqs of Karakalpakstan, the uppermost yellow square, are very closely related to the Karakalpak Qon'ırat according to their mtDNA.
These results are similar to those that emerged from the Italian studies of Qazaq, Uighur, and Kyrgyz Y chromosome and mitochondrial
DNA. Ethnic Turkic populations are generally exogamous. Consequently the male DNA is relatively isolated and immobile because men
traditionally stay in the same village from birth until death. They had to select their wives from other geographic regions
and sometimes married women from other ethnic groups. The female DNA within these groups is consequently more diversified. The results
suggest that in the delta, some Qon'ırat men have married Qazaq women and/or some Qazaq men have married Qon'ırat women.
Let us now turn to the primary focus of the Chaix and Heyer paper. Are the tribes and clans of the Karakalpaks and other ethnic groups
living within the Aral delta linked by kinship? Y chromosome polymorphisms were analysed for each separate lineage, clan, tribe, and
ethnic group using short tandem repeats. The resulting haplotypes were used to calculate a kinship coefficient at each respective level.
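The paper's kinship estimator is not reproduced here, but the underlying idea can be illustrated with a crude proxy: how much more often two men drawn from the same descent group share a Y-STR haplotype than two men drawn at random from the whole sample. Everything in the sketch below, including the haplotypes, is invented.

```python
from itertools import combinations

def match_probability(haplotypes):
    """Probability that two randomly drawn members share the same haplotype."""
    pairs = list(combinations(haplotypes, 2))
    return sum(a == b for a, b in pairs) / len(pairs)

# Invented Y-STR haplotypes (tuples of repeat counts) grouped by descent group.
population = {
    "lineage_A": [(13, 16, 24), (13, 16, 24), (13, 16, 24), (13, 17, 24)],
    "lineage_B": [(14, 15, 23), (14, 15, 23), (12, 16, 22), (14, 15, 23)],
    "unassigned": [(12, 14, 21), (15, 16, 25), (13, 15, 22), (11, 17, 23)],
}

everyone = [h for group in population.values() for h in group]
baseline = match_probability(everyone)
for name, haps in population.items():
    # Positive values mean the group's members are more related than random pairs.
    print(f"{name}: excess sharing = {match_probability(haps) - baseline:+.2f}")
```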
Within the two Karakalpak samples the Qon'ırat were all Shu'llik and came from several clans, only three of which permitted the computation
of kinship: the Qoldawlı, Qıyat, and Ashamaylı clans. However none of these clans had recognized lineages. The Khorezmian Uzbeks have also
long ago abandoned their tradition of preserving genealogical lineages.
The On To'rt Urıw were composed of four tribes, four clans, and four lineages:
- Qıtay tribe
- Qıpshaq tribe, Basar clan
- Keneges tribe, Omır and No'kis clans
- Man'g'ıt tribe, Qarasıraq clan
The Qazaq and the Turkmen groups were also structured along tribal, clan, and lineage lines.
The results of the study showed that lineages, where they were still maintained, exhibited high levels of kinship, the On To'rt Urıw having
by far the highest. People belonging to the same lineage were therefore significantly more related to each other than people selected at
random in the overall global population. Put another way, they share a common ancestor who is far more recent than the common ancestor for
the population as a whole:
Kinship coefficients for five different ethnic populations, including the Qon'ırat and the On To'rt Urıw.
From Chaix and Heyer et al, 2004.
The kinship coefficients at the clan level were lower, but were still significant in three groups - the Karakalpak Qon'ırat, the Qazaqs,
and the Turkmen. However for the Karakalpak On To'rt Urıw and the Uzbeks, men from the same clan were only fractionally more related to
each other than were men selected randomly from the population at large. When we reach the tribal level we find that the men in all five
ethnic groups show no genetic kinship whatsoever.
In these societies the male members of some but not all tribal clans are partially related to varying degrees, in the sense that they are
the descendants of a common male ancestor. Depending on the clan concerned this kinship can be strong, weak, or non-existent. However the
members of different clans within the same tribe show no such interrelationship at all. In other words, tribes are conglomerations of
clans that have no genetic links with each other apart from those occurring between randomly chosen populations. It suggests that such tribes
were formed politically, as confederations of unrelated clans, and not organically as a result of the expansion and sub-division of an
initially genetically homogenous extended family group.
By assuming a constant rate of genetic mutation over time and a generation time of 30 years, the researchers were able to calculate the
number of generations (and therefore years) that have elapsed since the existence of the single common ancestor. This was essentially the
minimum age of the descent group and was computed for each lineage and clan. However the estimated ages computed were very high. For example,
the age of the Qon'ırat clans was estimated at about 460 generations or 14,000 years (late Ice Age), while the age of the On To'rt Urıw lineages
was estimated at around 200 generations or 6,000 years (early Neolithic). Clearly these results are ridiculous. The explanation is that each
group included immigrants or outsiders who were clearly unrelated to the core population.
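The dating method can be sketched as follows, under the usual simplifying assumptions: a stepwise mutation model, an assumed per-locus mutation rate, and 30-year generations. The haplotypes and the mutation rate below are illustrative only and are not taken from the paper.

```python
from collections import Counter

MUTATION_RATE = 6.9e-4      # assumed effective rate per locus per generation
GENERATION_YEARS = 30       # generation time used in the study

# Invented Y-STR haplotypes (repeat counts at four hypothetical loci).
haplotypes = [
    (13, 16, 24, 10), (13, 16, 24, 10), (13, 16, 24, 11),
    (13, 17, 24, 10), (12, 16, 24, 10), (13, 16, 25, 10),
]

# Take the modal (most frequent) repeat count at each locus as the presumed
# ancestral haplotype of the descent group.
modal = tuple(Counter(h[i] for h in haplotypes).most_common(1)[0][0]
              for i in range(len(haplotypes[0])))

# Average squared distance (ASD) to the modal haplotype; under a stepwise
# mutation model its expectation is mutation_rate * generations.
asd = sum((a - m) ** 2 for h in haplotypes for a, m in zip(h, modal)) / (
    len(haplotypes) * len(modal))

generations = asd / MUTATION_RATE
print(f"Modal haplotype: {modal}")
print(f"Estimated age: {generations:.0f} generations "
      f"(~{generations * GENERATION_YEARS:.0f} years)")
```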
The calculation was therefore modified, restricting the sample to those individuals who belonged to the modal haplogroup of the descent group.
This excluded about 17% of the men in the initial sample. Results were excluded for those descent groups that contained fewer than three individuals:
[Table: minimum ages of the descent groups, giving the number of generations since the common ancestor, the corresponding age in years, and its 95% confidence interval. The surviving figures include 35 generations and about 1,058 years (454 - 3,704) for the Qon'ırat clans, 20 generations and about 595 years (255 - 2,083) for the Qazaq clans, about 3,051 years (1,307 - 10,677) for the Turkmen clans, and 13 generations and about 397 years (170 - 1,389) for the On To'rt Urıw lineages, with further lineage ages of about 415 years (178 - 1,451) and 516 years (221 - 1,806).]
The age of the On To'rt Urıw and other lineages averaged about 15 generations, equivalent to about 400 to 500 years. The age of the clans
varied more widely, from 20 generations for the Qazaqs, to 35 generations for the Qon'ırat, and to 102 generations for the Turkmen.
This dates the oldest common ancestor of the Qazaq and Qon'ırat clans to a time some 600 to 1,200 years ago. However the common
ancestor of the Turkmen clans is some 3,000 years old. The high age of the Turkmen clans was the result of the occurrence of a
significantly mutated haplotype within the modal haplogroup. It was difficult to judge whether these individuals were genuinely related
to the other clan members or were themselves recent immigrants.
These figures must be interpreted with considerable caution. Clearly the age of a clan's common ancestor is not the same as the age of the
clan itself, since that ancestor may have had ancestors of his own, whose lines of descent have become extinct over time. The calculated
ages therefore give us a minimum limit for the age of the clan and not the age of the clan itself.
In reality however, the uncertainty in the assumed rate of genetic mutation gives rise to extremely wide 95% confidence intervals. The
knowledge that certain Karakalpak Qon'ırat clans are most likely older than a time ranging from 450 to 3,700 years is of little practical
use to us. Clearly more accurate models are required.
Chaix, R.; Quintana-Murci, L.; Hegay, T.; Hammer, M. F.; Mobasher, Z.; Austerlitz, F.; and Heyer, E., 2007
The latest analysis of Karakalpak DNA comes from a study examining the genetic differences between various pastoral and farming populations
in Central Asia. In this region these two fundamentally different economies are organized according to quite separate social traditions:
- pastoral populations are classified into what their members claim to be descent groups (tribes, clans, and lineages), practise exogamous
marriage (where men must marry women from clans that are different to their own), and are organized on a patrilineal basis (children being
affiliated to the descent group of the father, not the mother);
- farmer populations are organized into nuclear and extended families rather than tribes and often practise endogamous marriage (where men
marry women from within the same clan, often their cousins).
The study aims to identify differences in the genetic diversity of the two groups as a result of these two different lifestyles. It examines
the genetic diversity of:
- maternally inherited mitochondrial DNA in 12 pastoral and 9 farmer populations, and
- paternally inherited Y chromosomes in 11 pastoral and 7 farmer populations.
The diversity of the mtDNA was examined by investigating one of two short segments, known as hypervariable segment number 1 or HVS-1. This and HVS-2
have been found to contain the highest density of neutral polymorphic variations between individuals. The diversity of the Y chromosome was examined
by investigating 6 short tandem repeats (STRs) in the non-recombining region of the chromosome.
This particular study sampled mtDNA from 5 different populations from Karakalpakstan: On To'rt Urıw Karakalpaks, Qon'ırat Karakalpaks,
Qazaqs, Turkmen, and Uzbeks. Samples collected as part of other earlier studies were used to provide mtDNA data on 16 further populations
(one of which was a general group of Karakalpaks) and Y chromosome data on 20 populations (two of which were On To'rt Urıw and Qon'ırat
Karakalpaks sampled in 2001 and 2002). The sample size for each population ranged from 16 to 65 individuals.
Both Karakalpak arıs were classified as pastoral, along with Qazaqs, Kyrgyz, and Turkmen. Uzbeks were classified as farmers, along with
Tajiks, Uighurs, Kurds, and Dungans.
Results of the mtDNA Analysis
The results of the mtDNA analysis are given in Table 1, copied from the paper.
Table 1. Sample Descriptions and Estimators of Genetic Diversity from the mtDNA Sequence
|Population||n||Location||Long||Lat||H||π||D||pD||Ps||C|
|Karakalpaks||20||Uzbekistan||58||43||0.99||5.29||-1.95||0.01||0.90||1.05|
|Karakalpaks (On To'rt Urıw)||53||Uzbekistan/Turkmenistan border||60||42||0.99||5.98||-1.92||0.01||0.70||1.20|
|Karakalpaks (Qon'ırat)||55||Karakalpakstan||59||43||0.99||5.37||-2.01||0.01||0.82||1.15|
|Qazaqs||50||Karakalpakstan||63||44||0.99||5.23||-1.97||0.01||0.88||1.11|
|Qazaqs||55||Kazakhstan||80||45||0.99||5.66||-1.87||0.01||0.69||1.25|
|Qazaqs||20|| ||68||42||1.00||5.17||-1.52||0.05||1.00||1.00|
|Kyrgyz||20||Kyrgyzstan||74||41||0.97||5.29||-1.38||0.06||0.55||1.33|
|Kyrgyz (Sary-Tash)||47||South Kyrgyzstan, Pamirs||73||40||0.97||5.24||-1.95||0.01||0.49||1.52|
|Kyrgyz (Talas)||48||North Kyrgyzstan||72||42||0.99||5.77||-1.65||0.02||0.77||1.14|
|Turkmen||51||Uzbekistan/Turkmenistan border||59||42||0.98||5.48||-1.59||0.04||0.53||1.42|
|Turkmen||41||Turkmenistan||60||39||0.99||5.20||-2.07||0.00||0.73||1.21|
|Turkmen||20|| ||59||40||0.98||5.28||-1.71||0.02||0.75||1.18|
|Dungans||16||Kyrgyzstan||78||41||0.94||5.27||-1.23||0.12||0.31||1.60|
|Kurds||32||Turkmenistan||59||39||0.97||5.61||-1.35||0.05||0.41||1.52|
|Uighurs||55||Kazakhstan||82||47||0.99||5.11||-1.91||0.01||0.62||1.28|
|Uighurs||16||Kyrgyzstan||79||42||0.98||4.67||-1.06||0.15||0.63||1.23|
|Uzbeks (North)||40||Karakalpakstan||60||43||0.99||5.49||-2.03||0.00||0.68||1.21|
|Uzbeks (South)||42||Surkhandarya, Uzbekistan||67||38||0.99||5.07||-1.96||0.01||0.81||1.14|
|Uzbeks (South)||20||Uzbekistan||66||40||0.99||5.33||-1.82||0.02||0.90||1.05|
|Uzbeks (Khorezm)||20||Khorezm, Uzbekistan||61||42||0.98||5.32||-1.62||0.04||0.70||1.18|
|Tajiks (Yagnobi)||20|| ||71||39||0.99||5.98||-1.76||0.02||0.90||1.05|
Key: the populations from the Karakalpaks down to the Turkmen are the pastoral populations (shaded grey in the original table); those from the Dungans down to the Tajiks are the farmer populations.
The table includes the following parameters:
- sample size, n, the number of individuals sampled in each population. Individuals had to be unrelated to any other member of the same sample
for at least two generations.
- the geographical longitude and latitude of the population sampled.
- heterozygosity, H, the proportion of different alleles occupying the same position in each mtDNA sequence. It measures the frequency of
heterozygotes for a particular locus in the genetic sequence and is one of several statistics indicating the level of genetic variation or
polymorphism within a population. When H=0, all alleles are the same and when H=1, all alleles are different.
- the mean number of pairwise differences, π, measures the average number of nucleotide differences between all pairs of HVS-1 sequences.
This is another statistic indicating the level of genetic variation within a population, in this case measuring the level of mismatch between pairs of sequences.
- Tajima’s D, D, measures the frequency distribution of alleles in a nucleotide sequence and is based on the difference between two estimations
of the population mutation rate. It is often used to distinguish between a DNA sequence that has evolved randomly (D=0) and one that has experienced directional selection favouring a single allele. It is consequently used as a test for natural selection. However it is also influenced by population history and negative values of D can indicate high rates of population growth.
- the probability that D is significantly different from zero, pD.
- the proportion of singletons, Ps, measures the relative number of unique polymorphisms in the sample. The higher the proportion of singletons,
the more the population has been affected by inward migration.
- the mean number of individuals carrying the same mtDNA sequence, C, is an inverse measure of diversity. The more individuals with the same
sequence, the lower the diversity within the population and the higher the proportion of individuals who are closely related. (A short computational sketch of π, D, Ps, and C follows this list.)
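The sketch below shows how π, Tajima's D, Ps, and C can be computed from a toy alignment of sequences. It follows the standard textbook formulas rather than the authors' pipeline, and the sequences are invented.

```python
from collections import Counter
from itertools import combinations
from math import sqrt

# A toy alignment of HVS-1-like sequences (invented).
seqs = ["ACGTACGTAC", "ACGTACGTAC", "ACGTATGTAC", "ACGAACGTAC",
        "ACGTACGTCC", "ACGTATGTAC", "TCGTACGTAC", "ACGTACGTAC"]
n = len(seqs)

# pi: mean number of nucleotide differences over all pairs of sequences.
pairs = list(combinations(seqs, 2))
pi = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs) / len(pairs)

# S: number of segregating (variable) sites.
S = sum(len(set(column)) > 1 for column in zip(*seqs))

# Tajima's D from pi and S, using the standard constants a1..e2.
a1 = sum(1 / i for i in range(1, n))
a2 = sum(1 / i ** 2 for i in range(1, n))
b1 = (n + 1) / (3 * (n - 1))
b2 = 2 * (n ** 2 + n + 3) / (9 * n * (n - 1))
c1 = b1 - 1 / a1
c2 = b2 - (n + 2) / (a1 * n) + a2 / a1 ** 2
e1, e2 = c1 / a1, c2 / (a1 ** 2 + a2)
D = (pi - S / a1) / sqrt(e1 * S + e2 * S * (S - 1))

# Ps (proportion of singleton sequences) and C (mean individuals per sequence).
counts = Counter(seqs)
Ps = sum(1 for c in counts.values() if c == 1) / n
C = n / len(counts)

print(f"pi={pi:.2f}  S={S}  Tajima's D={D:.2f}  Ps={Ps:.2f}  C={C:.2f}")
```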
The table shows surprisingly little differentiation between pastoral and farmer populations. Both show high levels of within population
genetic diversity (for both groups, median H=0.99 and π is around 5.3). Further calculations of genetic distance between populations, Fst, (
not presented in the table but given graphically in the online reference below) showed a corresponding low level of genetic differentiation
among pastoral populations as well as among farmer populations.
Both groups of populations also showed a significantly negative Tajima’s D, which the authors attribute to a high rate of demographic growth in
neutrally evolving populations.
Supplementary data made available online showed a weak correlation between genetic distance, Fst, and geographic distance for both pastoral and
farmer populations.
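As an illustration of what an Fst-style statistic measures, the sketch below compares two invented sets of haplogroup frequencies. The estimator actually used in the paper is more sophisticated, so treat this only as a conceptual example.

```python
import numpy as np

def gene_diversity(freqs):
    """1 minus the sum of squared haplogroup frequencies."""
    return 1.0 - np.sum(freqs ** 2)

# Invented haplogroup frequencies for two populations.
pop1 = np.array([0.50, 0.30, 0.15, 0.05])
pop2 = np.array([0.10, 0.25, 0.40, 0.25])

hs = (gene_diversity(pop1) + gene_diversity(pop2)) / 2   # mean within-population diversity
ht = gene_diversity((pop1 + pop2) / 2)                   # diversity of the pooled frequencies
fst = (ht - hs) / ht
print(f"Hs={hs:.3f}  Ht={ht:.3f}  Fst={fst:.3f}")        # Fst ~ 0.09 for these numbers
```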
Results of the Y chromosome Analysis
The results of the Y chromosome analysis are given in Table 2, also copied from the paper:
Table 2. Sample Descriptions and Estimators of Genetic Diversity from the Y chromosome STRs
|Population||n||Location||Long||Lat||H||π||r||Ps||C|
|Karakalpaks (On To'rt Urıw)||54||Uzbekistan/Turkmenistan border||60||42||0.86||3.40||1.002||0.24||2.84|
|Karakalpaks (Qon'ırat)||54||Karakalpakstan||59||43||0.91||3.17||1.003||0.28||2.35|
|Qazaqs||50||Karakalpakstan||63||44||0.85||2.36||1.004||0.16||2.78|
|Qazaqs||38||Almaty, KatonKaragay, Karatutuk, Rachmanovsky Kluchi, Kazakhstan||68||42||0.78||2.86||1.004||0.26||2.71|
|Qazaqs||49||South-east Kazakhstan||77||40||0.69||1.56||1.012||0.22||3.06|
|Kyrgyz||41||Central Kyrgyzstan (Mixed)||74||41||0.88||2.47||1.004||0.41||1.86|
|Kyrgyz (Sary-Tash)||43||South Kyrgyzstan, Pamirs||73||40||0.45||1.30||1.003||0.12||4.78|
|Kyrgyz (Talas)||41||North Kyrgyzstan||72||42||0.94||3.21||1.002||0.39||1.78|
|Mongolians||65||Ulaanbaatar, Mongolia||90||49||0.96||3.37||1.009||0.38||1.81|
|Turkmen||51||Uzbekistan/Turkmenistan border||59||42||0.67||1.84||1.006||0.27||3.00|
|Turkmen||21||Ashgabat, Turkmenistan||59||40||0.89||3.34||1.006||0.48||1.62|
|Dungans||22||Alexandrovka and Osh, Kyrgyzstan||78||41||0.99||4.13||1.005||0.82||1.10|
|Kurds||20||Bagyr, Turkmenistan||59||39||0.99||3.59||1.009||0.80||1.11|
|Uighurs||33||Almaty and Lavar, Kazakhstan||79||42||0.99||3.72||1.007||0.67||1.22|
|Uighurs||39||South East Kazakhstan||79||43||0.99||3.79||1.008||0.77||1.15|
|Uzbeks (North)||40||Karakalpakstan||60||43||0.96||3.42||1.005||0.48||1.54|
|Uzbeks (South)||28||Kashkadarya, Uzbekistan||66||40||1.00||3.53||1.008||0.93||1.04|
|Tajiks (Yagnobi)||22||Penjikent, Tajikistan||71||39||0.87||2.69||1.012||0.45||1.69|
Key: the populations from the Karakalpaks down to the Turkmen are the pastoral populations (shaded grey in the original table); those from the Dungans down to the Tajiks are the farmer populations.
This table also includes the sample size, n, and longitude and latitude of the population sampled, as well as the heterozygosity, H, the mean
number of pairwise differences, π, the proportion of singletons, Ps, and the mean number of individuals carrying the same Y STR haplotype, C.
In addition it includes a statistical computation of the demographic growth rate, r.
In contrast to the results obtained from the mtDNA analysis, both the heterozygosity and the mean pairwise differences computed from the Y chromosome
STRs were significantly lower in the pastoral populations than in the farmer populations. Thus Y chromosome diversity has been lost in the pastoral populations.
Conversely calculations of the genetic distance, Rst, between each of the two groups of populations showed that pastoral populations were more
highly differentiated than farmer populations. The supplemental data given online demonstrates that this is not a result of geographic distance,
there being no perceived correlation between genetic and geographic distance in either population group.
Finally the rate of demographic growth was found to be lower in pastoral than in farmer populations.
At first sight the results are counter-intuitive. One would expect that the diversity of mtDNA in pastoral societies would be higher than in
farming societies, because the men in those societies are marrying brides who contribute mtDNA from clans other than their own.
Similarly one would expect no great difference in Y chromosome diversity between pastoralists and farmers because both societies are patrilineal.
Leaving aside the matter of immigration, the males who contribute the Y chromosome are always selected from the local sampled population.
To understand the results, Chaix et al investigated the distribution of genetic diversity within individual populations using a statistical
technique called multi-dimensional scaling analysis or MDS. This attempts to sort or resolve a sample into its different component parts, illustrating
the results in two dimensions.
The example chosen in the paper focuses on the Karakalpak On To'rt Urıw arıs. The MDS analysis of the Y chromosome data
resolves the sample of 54 individuals into clusters, the members of each cluster sharing exactly the same STR haplotype:
Multidimensional Scaling Analysis based on the Matrix of Distance between Y STR Haplotypes
in a Specific Pastoral Population: the Karakalpak On To'rt Urıw.
Thus the sample contains 13 individuals from the O'mir clan of the Keneges tribe with the same haplotype (shown by the large cross), 10 individuals
of the Qarasıyraq clan of the Man'g'ıt tribe with the same haplotype (large diamond), and 10 individuals from the No'kis clan of the Keneges
tribe with the same haplotype (large triangle). Other members of the same clans have different haplotypes, as shown on the chart. Those close to the
so-called "identity core" group may have arisen by mutation. Those further afield might represent immigrants or adoptions.
No such clustering is observed following the MDS analysis of the mtDNA data for the same On To'rt Urıw arıs:
Multidimensional Scaling Analysis based on the Number of Differences between the Mitochondrial Sequence
in the Same Pastoral Population: the Karakalpak On To'rt Urıw.
Every individual in the sample, including those from the same clan, has a different HVS-1 sequence.
Similar MDS analyses of the different farmer populations apparently showed very few "identity cores" in the Y chromosome data and a total absence
of clustering in the mtDNA data, just as in the case of the On To'rt Urıw.
The overall conclusion was that the existence of "identity cores" was specific to the Y chromosome data and was mainly restricted to the pastoral
populations. This is reflected in the tables above, where we can see that the mean number of individuals carrying the same mtDNA sequence ranges
from about 1 to 1½ and shows no difference between pastoral and farming populations. On the other hand the mean number of individuals carrying
the same STR haplotype is low for farming populations but ranges from 1½ up to almost 5 for the pastoralists. Pastoral populations also have
a lower number of Y chromosome singletons.
Chaix et al point to three reinforcing factors to explain the existence of "identity cores" in pastoral as opposed to farming populations:
- pastoral lineages frequently split and divide with closely related men remaining in the same sub-group, thereby reducing Y chromosome diversity,
- small populations segmented into lineages can experience strong genetic drift, creating high frequencies of specific haplotypes, and
- random demographic uncertainty in small lineage groups can lead to the extinction of some haplotypes, also reducing diversity.
Together these factors reduce overall Y chromosome diversity, as the toy simulation below illustrates.
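A toy Wright-Fisher-style simulation makes the drift argument concrete: in a small closed group of men, haplotype diversity collapses within a few tens of generations. The group size and number of generations below are arbitrary.

```python
import random

random.seed(1)
GROUP_SIZE = 30       # arbitrary number of men in a closed descent group
GENERATIONS = 20      # arbitrary number of generations to simulate

men = list(range(GROUP_SIZE))   # start with 30 distinct haplotypes
for _ in range(GENERATIONS):
    # each man in the next generation inherits the haplotype of a randomly chosen father
    men = [random.choice(men) for _ in range(GROUP_SIZE)]

distinct = len(set(men))
commonest = max(men.count(h) for h in set(men))
print(f"After {GENERATIONS} generations: {distinct} haplotypes remain; "
      f"the commonest is carried by {commonest} of {GROUP_SIZE} men")
```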
To explain the similar levels of mtDNA diversity in pastoral and farmer populations, Chaix et al point to the complex rules connected with
exogamy. Qazaq men for example must marry a bride who has not had an ancestor belonging to the husband's own lineage for at least 7 generations,
while Karakalpak men must marry a bride from another clan, although she can belong to the same tribe. Each pastoral clan, therefore, is gaining
brides (and mtDNA) from external clans but is losing daughters (and mtDNA) to external clans. Such continuous and intense migration reduces mtDNA
genetic drift within the clan. This in turn lowers diversity to a level similar to that observed in farmer populations, which is in any event
already high. The process of two-way female migration effectively isolates the mtDNA structure of pastoral societies from their social structure.
One aspect overlooked by the study is that, until recent times, Karakalpak clans were geographically isolated in villages located in specific parts
of the Aral delta and therefore tended to always intermarry with one of their adjacent neighbouring clans.
In effect, the two neighbouring clans behaved like a single population, with females moving between clans in every generation. How such social
behaviour affected genetic structure was not investigated.
The Uzbeks were traditionally nomadic pastoralists and progressively became settled agricultural communities from the 16th century onwards. The
survey provided an opportunity to investigate the effect of this transition in lifestyle on the genetic structure of the Uzbek Y chromosome.
Table 2 above shows that the genetic diversity found among Uzbeks, as measured by heterozygosity and the mean number of pairwise differences,
was similar to that of the other farmer populations, as was the proportion of singleton haplotypes. Equally the mean number of individuals carrying
the same Y STR haplotype was low (1 to 1½), indicating an absence of the haplotype clustering (or "identity cores") observed in pastoral
populations. The pastoral "genetic signature" must have been rapidly eroded, especially in the case of the northern Uzbeks from Karakalpakstan,
who only settled from the 17th century onwards.
Two reasons are proposed for this rapid transformation. Firstly the early collapse and integration of the Uzbek descent groups following their
initial settlement and secondly their mixing with traditional Khorezmian farming populations, which led to the creation of genetic admixtures of
the two groups.
Of course the Karakalpak On To'rt Urıw have been settled farmers for just as long as many Khorezmian Uzbeks and cannot in any way be strictly
described as pastoralists. Indeed the majority of Karakalpak Qon'ırats have also been settled for much of the 20th century. However both
have strictly maintained their traditional pastoralist clan structure and associated system of exogamous marriage. So although their lifestyles have
changed radically, their social behaviour to date has not.
Discussion and Conclusions
The Karakalpaks and their Uzbek and Qazaq neighbours have no comprehensive recorded history, just occasional historical reports coupled
with oral legends which may or may not relate to certain historical events in their past. We therefore have no record of where or when the
Karakalpak confederation emerged and for what political or other reasons.
In the absence of solid archaeological or historical evidence, many theories have been advanced to explain the origin of the Karakalpaks.
Their official history, as taught in Karakalpak colleges and schools today, claims that the Karakalpaks are the descendants of the original
endemic nomadic population of the Khorezm oasis, most of whom were forced to leave as a result of the Mongol invasion in 1221 and the
subsequent desiccation of the Aral delta following the devastation of Khorezm by Timur in the late 14th century, only returning in
significant numbers during the 18th century. We fundamentally disagree with this simplistic picture, which uncritically endures with high-
ranking support because it purports to establish an ancient Karakalpak origin and justifies tenure of the current homeland.
While population genetics cannot unravel the full tribal history of the Karakalpaks per se, it can give us important clues to
their formation and can eliminate some of the less likely theories that have been proposed.
The two arıs of the Karakalpaks, the Qon'ırat and the On To'rt Urıw, are very similar to each other genetically, especially in the
male line. Both are equally close to the Khorezmian Uzbeks, their southern neighbours. Indeed the genetic distances between the different
populations of Uzbeks scattered across Uzbekistan are no greater than the distances between many of them and the Karakalpaks. This
suggests that Karakalpaks and Uzbeks have very similar origins. If we want to find out about the formation of the Karakalpaks we should
look towards the emergence of the Uzbek (Shaybani) Horde and its eastwards migration under the leadership of Abu'l Khayr, who united much
of the Uzbek confederation between 1428 and 1468.
Like the Uzbeks, the Karakalpaks are extremely diverse genetically. One only has to spend time with them to realize that some look European,
some look Caucasian, and some look typically Mongolian. Their DNA turns out to be an admixture, roughly balanced between eastern and
western populations. Two of their main genetic markers have far-eastern origins, M9 being strongly linked to Chinese and other Far Eastern
peoples and M130 being linked to the Mongolians and Qazaqs. On the other hand, M17 is strong in Russia, the Ukraine, and Eastern Europe,
while M89 is strong in the Middle East, the Caucasus, and Russia. M173 is strong in Western Europe and M45 is believed to have originated
in Central Asia, showing that some of their ancestry goes back to the earliest inhabitants of that region. In fact the main difference
between the Karakalpaks and the Uzbeks is a slight difference in the mix of the same markers. Karakalpaks have a somewhat greater bias
towards the eastern markers. One possible cause could be the inter-marriage between Karakalpaks and Qazaqs over the past 400 years, a theory
that gains some support from the close similarities in the mitochondrial DNA of the neighbouring female Karakalpak Qon'ırat and Qazaqs
of the Aral delta.
After the Uzbeks, Karakalpaks are next closest to the Uighurs, the Crimean Tatars, and the Kazan Tatars, at least in the male line. However
in the female line the Karakalpaks are quite different from the Uighurs and Crimean Tatars (and possibly from the Kazan Tatars as well).
There is clearly a genetic link with the Tatars of the lower Volga through the male line. Of course the Volga region has been closely linked
through communications and trade with Khorezm from the earliest days.
The Karakalpaks are genetically distant from the Qazaqs and the Turkmen, and even more so from the Kyrgyz and the Tajiks. We know that the
Karakalpaks were geographically, politically, and culturally very close to the Qazaqs of the Lesser Horde prior to their migration into
the Aral delta and were even once ruled by Qazaq tribal leaders. From their history, therefore, one might have speculated that the Karakalpaks
may have been no more than another tribal group within the overall Qazaq confederation. This is clearly not so. The Qazaqs have a quite
different genetic history, being far more homogenous and genetically closer to the Mongolians of East Asia. However as we have seen, the
proximity of the Qazaqs and Karakalpaks undoubtedly led to intermarriage and therefore some level of genetic exchange.
Karakalpak Y chromosome polymorphisms show different patterns from mtDNA polymorphisms in a similar manner to that identified in certain
other Central Asian populations. This seems to be associated with the Turkic traditions of exogamy and so-called patrilocal marriage.
Marriage is generally not permissible between couples belonging to the same clan, so men must marry women from other clans, or tribes, or
in a few cases even different ethnic groups. After the marriage the groom stays in his home village and his bride moves from her village
to his. The result is that the male non-recombining part of the Y chromosome becomes localized as a result of its geographical isolation,
whereas the female mtDNA benefits from genetic mixing as a result of the albeit short-range migration of young brides from different clans.
One of the most important conclusions is the finding that clans within the same tribe show no sign of genetic kinship, whether the tribe
concerned is Karakalpak, Uzbek, Qazaq, or Turkmen. Indeed among the most settled ethnic groups, the Uzbeks and Karakalpak On To'rt Urıw,
there is very little kinship even at clan level. It seems that settled agricultural communities soon lose their strong tribal identity and
become more open-minded to intermarriage with different neighbouring ethnic groups. Indeed the same populations place less importance on their
genealogy and no longer maintain any identity according to lineage.
It has generally been assumed that most Turkic tribal groups like the Uzbeks were formed as confederations of separate tribes and this is
confirmed by the recent genetic study of ethnic groups from Karakalpakstan. We now see that this extends to the tribes themselves, with an
absence of any genetic link between clans belonging to the same tribe. Clearly they too are merely associations of disparate groups, formed
because of some historical reason other than descent. Possible causes for such an association of clans could be geographic or economic, such
as common land use or shared water rights; military, such as a common defence pact or the construction of a shared qala; or perhaps political,
such as common allegiance to a strong tribal leader.
The history of Central Asia revolves around migrations and conflicts and the formation, dissolution, and reformation of tribal confederations,
from the Saka Massagetae and the Sarmatians, to the Oghuz and Pechenegs, the Qimek, Qipchaq, and Karluk, the Mongols and Tatars, the White and
Golden Hordes, the Shaybanid and Noghay Hordes, and finally the Uzbek, Qazaq, and Karakalpak confederations. Like making cocktails from
cocktails, the gene pool of Central Asia was constantly being scrambled, more so on the female line as a result of exogamy and patrilineal descent.
The same tribal and clan names occur over and over again throughout the different ethnic Qipchaq-speaking populations of Central Asia,
but in different combinations and associations. Many of the names predate the formation of the confederations to which they now belong,
relating to earlier Turkic and Mongol tribal factions. Clearly tribal structures are fluid over time, with some groups withering or
being absorbed by others, while new groups emerge or are added.
When Abu'l Khayr Sultan became khan of the Uzbeks in 1428-29, their confederation consisted of at least 24 tribes, many with smaller
subdivisions. The names of 6 of those tribes occur among the modern Karakalpaks. A 16th century list, based on an earlier document,
gives the names of 92 nomadic Uzbek tribes, at least 20 of which were shared by the later breakaway Qazaqs. 13 of the 92 names also
occur among the modern Karakalpaks.
Shortly after his enthronement as the Khan of Khorezm in 1644-45, Abu'l Ghazi Khan reorganized the tribal structure of the local Uzbeks
into four tüpe:
|Tüpe||Main Tribes||Secondary Tribes
|On Tort Urugh||On To'rt Urıw||Qan'glı|
[The remaining rows are fragmentary: among the other tüpe they list the Durman, Yüz, and Ming, the Shaykhs, Burlaqs, and Arabs, and the Uyg'ır.]
8 out of the 11 tribal names associated with the first three tüpe are also found within the Karakalpak tribal structure.
Clearly there is greater overlap between the Karakalpak tribes and the local Khorezmian Uzbek tribes than with the Uzbek tribes in general.
The question is whether these similarities pre-dated the Karakalpak migration into the Aral delta or are a result of later Uzbek influences?
We know that the Qon'ırat were a powerful tribe in Khorezm for Uzbeks and Karakalpaks alike. They were mentioned as one of the Karakalpak
"clans" on the Kuvan Darya [Quwan Darya] by Gladyshev in 1741 along with the Kitay, Qipchaq, Kiyat, Kinyagaz-Mangot (Keneges-Man'g'ıt), Djabin, Miton,
and Usyun. Munis recorded that Karakalpak Qon'ırat, Keneges, and Qıtay troops supported Muhammad Amin Inaq against the Turkmen in 1769.
Thanks to Sha'rigu'l Payzullaeva we have a comparison of the Qon'ırat tribal structure in the Aral Karakalpaks, the Surkhandarya Karakalpaks,
and the Khorezmian Uzbeks, derived from genealogical records:
The different status of the same Qon'ırat tribal groups among the Aral and Surkhandarya Karakalpaks and the Khorezmian Uzbeks
|Tribal group||Aral Karakalpaks||Surkhandarya Karakalpaks||Khorezmian Uzbeks|
|Qostamg'alı||clan||branch of tribe|| |
|Qanjıg'alı||tiıre||branch of tribe||tube|
|Shu'llik||division of arıs||clan|| |
|Tartıwlı||tiıre||branch of tribe||clan|
|Sıyraq||clan||branch of clan|| |
|Qaramoyın||tribe||branch of clan|| |
A tube is a branch of a tribe among the Khorezmian Uzbeks and a tiıre is a branch of a
clan among the Aral Karakalpaks.
The Karakalpak enclave in Surkhandarya was already established in the first half of the 18th century, some Karakalpaks fleeing
to Samarkand and beyond following the devastating Jungar attack of 1723. Indeed it may even be older - the Qon'ırat have a
legend that they came to Khorezm from the country of Zhideli Baysun in Surkhandarya. This suggests that some Karakalpaks had
originally travelled south with factions from the Shaybani Horde in the early 16th century. The fact that the Karakalpak
Qon'ırats remaining in that region have a similar tribal structure to the Khorezmian Uzbeks is powerful evidence that the tribal
structure of the Aral Karakalpaks had broadly crystallized prior to their migration into the Aral delta.
The Russian ethnographer Tatyana Zhdanko was the first academic to make an in-depth study of Karakalpak tribal structure. She
not only uncovered the similarities between the tribal structures of the Uzbek and Karakalpak Qon'ırats in Khorezm but also the
closeness of their respective customs and material and spiritual cultures. She concluded that one should not only view the
similarity between the Uzbek and Karakalpak Qon'ırats in a historical sense, but should also see the commonality of their present-
day ethnic relationships. B. F. Choriyev added that "this kind of similarity should not only be sought amongst the Karakalpak
and the Khorezmian Qon'ırats but also amongst the Surkhandarya Qon'ırats. They all have the same ethnic history."
Such ethnographic studies provide support to the findings that have emerged from the recent studies of Central Asian genetics.
Together they point towards a common origin of the Karakalpak and Uzbek confederations. They suggest that each was formed out of
the same melange of tribes and clans inhabiting the Dasht-i Qipchaq following the collapse of the Golden Horde, a vast expanse
ranging northwards from the Black Sea coast to western Siberia and then eastwards to the steppes surrounding the lower and middle
Syr Darya, encompassing the whole of the Aral region along the way.
Of course the study of the genetics of present-day populations gives us the cumulative outcome of hundreds of thousands of years of
complex human history and interaction. We now need to establish a timeline, tracking genetic changes in past populations using the
human skeletal remains retrieved from Saka, Sarmatian, Turkic, Tatar, and early Uzbek and Karakalpak archaeological burial sites. Such
studies might pinpoint the approximate dates when important stages of genetic intermixing occurred.
Sha'rigu'l Payzullaeva recalls an interesting encounter at the Regional Studies Museum in No'kis during the month of August 1988. Thirty-eight
elderly men turned up together to visit the Museum. Each wore a different kind of headdress, some with different sorts of taqıya,
others with their heads wrapped in a double kerchief. They introduced themselves as Karakalpaks from Jarqorghan rayon in Surkhandarya
viloyati, just north of the Afghan border. One of them said "Oh daughter, we are getting old now. We decided to come here to see our
homeland before we die."
During their visit to the Museum they said that they would travel to Qon'ırat rayon the following day. Sha'rigu'l was curious to know why
they specifically wanted to visit Qon'ırat. They explained that it was because most of the men were from the Qon'ırat clan.
One of the men introduced himself to Sha'rigu'l: "My name is Mirzayusup Khaliyarov, the name of my clan is Qoldawlı." After discovering that
Sha'rigu'l was also Qoldawlı his eyes filled with tears and he kissed her on the forehead.
Bowles, G. T., The People of Asia, Weidenfeld and Nicolson, London, 1977.
Comas, D., Calafell, F., Mateu, E., Pérez-Lezaun, A., Bosch, E., Martínez-Arias, R., Clarimon, J., Facchini, F.,
Fiori, G., Luiselli, D., Pettener, D., and Bertranpetit, J., Trading Genes along the Silk Road: mtDNA Sequences and
the Origin of Central Asian Populations, American Journal of Human Genetics, 63, pages 1824 to 1838, 1998.
Cavalli-Sforza, L. L., Menozzi, P., and Piazza, A., The History and Geography of Human Genes, Princeton University Press, 1994.
Chaix, R., Austerlitz, F., Khegay, T., Jacquesson, S., Hammer, M. F., Heyer, E., and Quintana-Murci, L., The Genetic or
Mythical Ancestry of Descent Groups: Lessons from the Y Chromosome, American Journal of Human Genetics, Volume 75, pages
1113 to 1116, 2004.
Chaix, R., Quintana-Murci, L., Hegay, T., Hammer, M. F., Mobasher, Z., Austerlitz, F., and Heyer, E., From Social to Genetic
Structures in Central Asia, Current Biology, Volume 17, Issue 1, pages 43 to 48, 9 January 2007.
Comas, D., Plaza, S., Spencer Wells, R., Yuldaseva, N., Lao, O., Calafell, F., and Bertranpetit, J., Admixture, migrations,
and dispersals in Central Asia: evidence from maternal DNA lineages, European Journal of Human Genetics, pages 1 to 10, 2004.
Heyer, E., Central Asia: A common inquiry in genetics, linguistics and anthropology, Presentation given at the conference
entitled "Origin of Man, Language and Languages", Aussois, France, 22-25 September, 2005.
Heyer, E., Private communications to the authors, 14 February and 17 April, 2006.
Krader, L., Peoples of Central Asia, The Uralic and Altaic Series, Volume 26, Indiana University, Bloomington, 1971.
Passarino, G., Semino, O., Magri, C., Al-Zahery, N., Benuzzi, G., Quintana-Murci, L., Andellnovic, S., Bullc-Jakus, F., Liu, A.,
Arslan, A., and Santachiara-Benerecetti, A., The 49a,f Haplotype 11 is a New Marker of the EU19 Lineage that Traces Migrations
from Northern Regions of the Black Sea, Human Immunology, Volume 62, pages 922 to 932, 2001.
Payzullaeva, Sh., Numerous Karakalpaks, many of them! [in Karakalpak], Karakalpakstan Publishing, No'kis, 1995.
Pérez-Lezaun, A., Calafell, F., Comas, D., Mateu, E., Bosch, E., Martínez-Arias, R., Clarimón, J., Fiori, G.,
Luiselli, D., Facchini, F., Pettener, D., and Bertranpetit, J., Sex-Specific Migration Patterns in Central Asian Populations,
Revealed by Analysis of Y-Chromosome Short Tandem Repeats and mtDNA, American Journal of Human Genetics, Volume 65, pages 208
to 219, 1999.
Spencer Wells, R., The Journey of Man, A Genetic Odyssey, Allen Lane, London, 2002.
Spencer Wells, R., et al, The Eurasian Heartland: A continental perspective on Y-chromosome diversity, Proceedings
of the National Academy of Science, Volume 98, pages 10244 to 10249, USA, 28 August 2001.
Underwood, J. H., Human Variation and Human Micro-Evolution, Prentice-Hall Inc., New Jersey, 1979.
Underwood, P. A., et al, Detection of Numerous Y Chromosome Biallelic Polymorphisms by Denaturing High-Performance
Liquid Chromatography, Genome Research, Volume 7, pages 996 to 1005, 1997.
Zerjal, T., Spencer Wells, R., Yuldasheva, N., Ruzibakiev, R., and Tyler-Smith, C., A Genetic Landscape Reshaped by Recent
Events: Y Chromosome Insights into Central Asia, American Journal of Human Genetics, Volume 71, pages 466 to 482, 2002.
World's Columbian Exposition
The World's Columbian Exposition (the official shortened name for the World's Fair: Columbian Exposition, also known as The Chicago World's Fair) was a World's Fair held in Chicago in 1893 to celebrate the 400th anniversary of Christopher Columbus' arrival in the New World in 1492. The iconic centerpiece of the Fair, a large water pool, represented the long voyage Columbus took to the New World. Chicago bested New York City; Washington, D.C.; and St. Louis for the honor of hosting the fair. The fair was an influential social and cultural event and had a profound effect on architecture, sanitation, the arts, Chicago's self-image, and American industrial optimism. The Chicago Columbian Exposition was, in large part, designed by Daniel Burnham and Frederick Law Olmsted. It was the prototype of what Burnham and his colleagues thought a city should be. It was designed to follow Beaux Arts principles of design, namely French neoclassical architecture principles based on symmetry, balance, and splendor.
The exposition covered more than 600 acres (2.4 km2), featuring nearly 200 new (but purposely temporary) buildings of predominantly neoclassical architecture, canals and lagoons, and people and cultures from around the world. More than 27 million people attended the exposition during its six-month run. Its scale and grandeur far exceeded the other world fairs, and it became a symbol of the emerging American Exceptionalism, much in the same way that the Great Exhibition became a symbol of the Victorian era United Kingdom.
Dedication ceremonies for the fair were held on October 21, 1892, but the fairgrounds were not actually opened to the public until May 1, 1893. The fair continued until October 30, 1893. In addition to recognizing the 400th anniversary of the discovery of the New World by Europeans, the fair also served to show the world that Chicago had risen from the ashes of the Great Chicago Fire, which had destroyed much of the city in 1871. On October 9, 1893, the day designated as Chicago Day, the fair set a world record for outdoor event attendance, drawing 716,881 people to the fair.
Many prominent civic, professional, and commercial leaders from around the United States participated in the financing, coordination, and management of the Fair, including Chicago shoe tycoon Charles H. Schwab, Chicago railroad and manufacturing magnate John Whitfield Bunn, and Connecticut banking, insurance, and iron products magnate Milo Barnum Richardson, among many others.
Planning and organization
The fair was planned in the early 1890s, the Gilded Age of rapid industrial growth, immigration, and class tension. World's fairs, such as London's 1851 Crystal Palace Exhibition, had been successful in Europe as a way to bring together societies fragmented along class lines. However, the first American attempt at a world's fair, held in Philadelphia in 1876, lost money despite being hugely successful in attendance. Nonetheless, ideas about marking the 400th anniversary of Columbus' landing started to take hold in the 1880s. Towards the end of the decade, civic leaders in St. Louis, New York City, Washington DC and Chicago expressed interest in hosting a fair in order to generate profits, boost real estate values, and promote their cities. Congress was called on to decide the location. New York's financiers J. P. Morgan, Cornelius Vanderbilt, and William Waldorf Astor, among others, pledged $15 million to finance the fair if Congress awarded it to New York, while Chicagoans Charles T. Yerkes, Marshall Field, Philip Armour, Gustavus Swift, and Cyrus McCormick offered to finance a Chicago fair. What finally persuaded Congress was Chicago banker Lyman Gage, who raised several million additional dollars in a 24-hour period, over and above New York's final offer.
The exposition corporation and national exposition commission settled on Jackson Park as the fair site. Daniel H. Burnham was selected as director of works, and George R. Davis as director-general. Burnham emphasized architecture and sculpture as central to the fair and assembled the period's top talent to design the buildings and grounds including Frederick Law Olmsted for the grounds. The buildings were neoclassical, painted white, resulting in the name “White City” for the fair site.
Meanwhile Davis's team organized the exhibits with the help of G. Brown Goode of the Smithsonian. The Midway was inspired by the 1889 Paris Universal Exposition which included ethnological "villages". The Exposition's offices set up shop in the upper floors of the Rand McNally Building on Adams Street, the world's first all-steel-framed skyscraper.
The fair opened in May and ran through October 30, 1893. Forty-six nations participated in the fair (it was the first world's fair to have national pavilions), constructing exhibits and pavilions and naming national "delegates" (for example, Haiti selected Frederick Douglass to be its delegate). The Exposition drew nearly 26 million visitors.
The exposition was located in Jackson Park and on the Midway Plaisance on 630 acres (2.5 km2) in the neighborhoods of South Shore, Jackson Park Highlands, Hyde Park and Woodlawn. Charles H. Wacker was the Director of the Fair. The layout of the fairgrounds was created by Frederick Law Olmsted, and the Beaux-Arts architecture of the buildings was under the direction of Daniel Burnham, Director of Works for the fair. Renowned local architect Henry Ives Cobb designed several buildings for the exposition. The Director of the American Academy in Rome, Francis Davis Millet, directed the painted mural decorations. Indeed, it was a coming-of-age for the arts and architecture of the "American Renaissance", and it showcased the burgeoning neoclassical and Beaux-Arts styles.
White City
Most of the buildings of the fair were designed in the classical style of architecture. The area at the Court of Honor was known as The White City. The buildings were clad in white stucco, which, in comparison to the tenements of Chicago, seemed illuminated. It was also called the White City because of the extensive use of street lights, which made the boulevards and buildings usable at night. It included such buildings as:
- The Administration Building, designed by Richard Morris Hunt
- The Agricultural Building, designed by Charles McKim
- The Manufactures and Liberal Arts Building, designed by George B. Post. If this building were standing today, it would rank second in volume and third in footprint on the list of largest buildings (130,000 m², 8,500,000 m³).
- The Mines and Mining Building, designed by Solon Spencer Beman
- The Electricity Building, designed by Henry Van Brunt and Frank Maynard Howe
- The Machinery Building, designed by Robert Swain Peabody of Peabody and Stearns
- The Woman's Building, designed by Sophia Hayden
- The Transportation Building, designed by Adler & Sullivan
Louis Sullivan's polychrome proto-Modern Transportation Building was an outstanding exception to the prevailing style, as he tried to develop an organic American form. Years later, in 1922, he wrote that the classical style of the White City had set back modern American architecture by forty years.
As detailed in Erik Larson's popular history The Devil in the White City, extraordinary effort was required to accomplish the exposition, and much of it was unfinished on opening day. The famous Ferris Wheel, which proved to be a major attendance draw and helped save the fair from bankruptcy, was not finished until June, because of waffling by the board of directors the previous year on whether to build it. Frequent debates and disagreements among the developers of the fair added many delays. The spurning of Buffalo Bill's Wild West Show proved a serious financial mistake. Buffalo Bill set up his highly popular show next door to the fair and brought in a great deal of revenue that he did not have to share with the developers. Nonetheless, construction and operation of the fair proved to be a windfall for Chicago workers during the serious economic recession that was sweeping the country.
Early in July, a Wellesley College English teacher named Katharine Lee Bates visited the fair. The White City later inspired the reference to "alabaster cities" in her poem "America the Beautiful". The exposition was extensively reported by Chicago publisher William D. Boyce's reporters and artists. There is a very detailed and vivid description of all facets of this fair by the Persian traveler Mirza Mohammad Ali Mo'in ol-Saltaneh written in Persian. He departed from Persia on April 20, 1892, especially for the purpose of visiting the World's Columbian Exposition.
The fair ended with the city in shock, as popular mayor Carter Harrison, Sr. was assassinated by Patrick Eugene Prendergast two days before the fair's closing. Closing ceremonies were canceled in favor of a public memorial service. Jackson Park was returned to its status as a public park, in much better shape than its original swampy form. The lagoon was reshaped to give it a more natural appearance, except for the straight-line northern end where it still laps up against the steps on the south side of the Palace of Fine Arts/Museum of Science & Industry building. The Midway Plaisance, a park-like boulevard which extends west from Jackson Park, once formed the southern boundary of the University of Chicago, which was being built as the fair was closing (the university has since developed south of the Midway). The university's football team, the Maroons, were the original "Monsters of the Midway". The exposition is mentioned in the university's alma mater: "The City White hath fled the earth,/But where the azure waters lie,/A nobler city hath its birth,/The City Gray that ne'er shall die."
Role in the City Beautiful Movement
The White City is largely credited with ushering in the City Beautiful movement and planting the seeds of modern city planning. The highly integrated design of the landscapes, promenades, and structures provided a vision of what is possible when planners, landscape architects, and architects work together on a comprehensive design scheme. The White City inspired cities to focus on the beautification of the components of the city over which municipal government had control: streets, municipal art, public buildings and public spaces. The designs of the City Beautiful Movement (closely tied with the municipal art movement) are identifiable by their classical architecture, plan symmetry, picturesque views, axial plans, as well as their magnificent scale. Where the municipal art movement focused on beautifying one feature in a city, the City Beautiful movement began to make improvements on the scale of the district. The White City of the World's Columbian Exposition inspired the Merchant's Club of Chicago to commission Daniel Burnham to create the Plan of Chicago in 1909, which became the first modern comprehensive city plan in America.
Surviving structures
Almost all of the fair's structures were designed to be temporary; of the more than 200 buildings erected for the fair, the only two which still stand in place are the Palace of Fine Arts and the World's Congress Auxiliary Building. From the time the fair closed until 1920, the Palace of Fine Arts housed the Field Columbian Museum (now the Field Museum of Natural History, since relocated); in 1933, the Palace building re-opened as the Museum of Science and Industry. The second building, the World's Congress Building, was one of the few buildings not built in Jackson Park, instead it was built downtown in Grant Park. The cost of construction of the World's Congress Building was shared with the Art Institute of Chicago, which, as planned, moved into the building (the museum's current home) after the close of the fair.
Three other significant buildings survived the fair. The first is the Norway pavilion, a recreation of a traditional wooden stave church which is now preserved at a museum called Little Norway in Blue Mounds, Wisconsin. The second is the Maine State Building, designed by Charles Sumner Frost, which was purchased by the Ricker family of Poland Spring, Maine. They moved the building to their resort to serve as a library and art gallery. The Poland Spring Preservation Society now owns the building, which was listed on the National Register of Historic Places in 1974. The third is the Dutch House, which was moved to Brookline, Massachusetts.
The main altar at St. John Cantius in Chicago, as well as its matching two side altars, are reputed to be from the Columbian Exposition.
Since many of the other buildings at the fair were intended to be temporary, they were removed after the fair. Their facades were made not of stone, but of a mixture of plaster, cement and jute fiber called staff, which was painted white, giving the buildings their "gleam". Architecture critics derided the structures as "decorated sheds". The White City, however, so impressed everyone who saw it (at least before air pollution began to darken the façades) that plans were considered to refinish the exteriors in marble or some other material. In any case, these plans were abandoned in July 1894 when much of the fair grounds was destroyed in a fire, thus assuring their temporary status.
Electricity at the fair
The exposition included a building devoted to electrical exhibits. General Electric Company (backed by Thomas Edison and J.P. Morgan) had proposed to power the electric exhibits with direct current, originally at a cost of US$1.8 million. After this was initially rejected as exorbitant, General Electric re-bid their costs at $554,000. However, Westinghouse proposed using its alternating current system to illuminate the Columbian Exposition in Chicago for $399,000, and Westinghouse won the bid. It was a key event in what has been called the War of the Currents, an early demonstration in America of the safety and reliability of alternating current.
All the exhibits were from commercial enterprises. Thomas Edison, Brush, Western Electric, and Westinghouse had exhibits. There were many demonstrations of electrical devices developed by Nikola Tesla. These included high-frequency high-voltage lighting that produced more efficient light with less heat, a two-phase induction motor, and generators to power the system. Tesla demonstrated a series of electrical effects in a lecture he had previously been performing throughout America and Europe. This included using high-voltage, high-frequency alternating current to light a wireless gas-discharge lamp and shooting lightning from his fingertips.
General Electric banned the use of Edison's lamps in Westinghouse's plan in retaliation for losing the bid. Westinghouse's company quickly designed a double-stopper lightbulb (sidestepping Edison's patents) and was able to light the fair. The Westinghouse lightbulb was invented by Reginald Fessenden, later to be the first person to transmit voice by radio. Fessenden replaced Edison's delicate platinum lead-in wires with an iron-nickel alloy, thus greatly reducing the cost and increasing the life of the lamp.
The Westinghouse Company displayed several polyphase systems. The exhibits included a switchboard, polyphase generators, step-up transformers, transmission line, step-down transformers, commercial size induction motors and synchronous motors, and rotary direct current converters (including an operational railway motor). The working scaled system allowed the public a view of a system of polyphase power which could be transmitted over long distances, and be utilized, including the supply of direct current. Meters and other auxiliary devices were also present.
Also at the Fair, the Chicago Athletic Association Football team played one of the very first night football games against West Point (the earliest being on September 28, 1892 between Mansfield State Normal and Wyoming Seminary). Chicago won the game 14-0. The game lasted only 40 minutes, compared to the normal 90 minutes.
The World's Columbian Exposition was the first world's fair with an area for amusements that was strictly separated from the exhibition halls. This area, developed by a young music promoter, Sol Bloom, concentrated on Midway Plaisance and introduced the term "midway" to American English to describe the area of a carnival or fair where sideshows are located.
It included carnival rides, among them the original Ferris Wheel, built by George Ferris. This wheel was 264 feet (80 m) high and had 36 cars, each of which could accommodate 60 people. The importance of the Columbian Exposition is highlighted by the use of "Rueda de Chicago" (Chicago Wheel) in many Latin American countries such as Costa Rica and Chile in reference to the Ferris Wheel. One attendee, George C. Tilyou, later credited the sights he saw on the Chicago midway with inspiring him to create America's first major amusement park, Steeplechase Park in Coney Island, NY.
Eadweard Muybridge gave a series of lectures on the Science of Animal Locomotion in the Zoopraxographical Hall, built specially for that purpose on Midway Plaisance. He used his zoopraxiscope to show his moving pictures to a paying public. The hall was the first commercial movie theater.
The "Street in Cairo" included the popular dancer known as Little Egypt. She introduced America to the suggestive version of the belly dance known as the "hootchy-kootchy", to a tune said to be improvised by Sol Bloom (and now more commonly associated with snake charmers) which he had made as an improvisation when his dancers had no music to dance to. Bloom did not copyright the song, putting it straight into the public domain.
Music at the fair
Black musicians
- Joseph Douglass, classical violinist, who achieved wide recognition after his performance there and became the first African-American violinist to conduct a transcontinental tour and the first to tour as a concert violinist.
Other music and musicians
- The first Indonesian music performance in the United States was at the exposition.
- A group of hula dancers led to increased awareness of Hawaiian music among Americans throughout the country.
- Stoughton Musical Society, the oldest choral society in the United States, presented the first concerts of early American music at the exposition.
- The first Eisteddfod (a Welsh choral competition with a history spanning many centuries) held outside of Wales was held in Chicago at the exposition.
- August 12, 1893 – Antonín Dvořák conducted a gala "Bohemian Day" concert at the World's Columbian Exposition, where he was besieged by visitors, including the conductor of the Chicago Symphony, who arranged for a performance of the "American" String Quartet, just completed in Spillville, Iowa, during a Dvořák family vacation in a Czech-speaking community there.
Non-musical attractions
Although denied a spot at the fair, Buffalo Bill Cody decided to come to Chicago anyway, setting up his Wild West show just outside the edge of the exposition. Historian Frederick Jackson Turner gave academic lectures reflecting on the end of the frontier which Buffalo Bill represented.
The Electrotachyscope of Ottomar Anschütz, which used a Geissler tube to project the illusion of moving images, was demonstrated. Louis Comfort Tiffany made his reputation with a stunning chapel designed and built for the Exposition. This chapel has been carefully reconstructed and restored, and can be seen at the Charles Hosmer Morse Museum of American Art.
Architect Kirtland Cutter's Idaho Building, a rustic log construction, was a popular favorite, visited by an estimated 18 million people. The building's design and interior furnishings were a major precursor of the Arts and Crafts movement.
The John Bull locomotive was displayed. It was already 62 years old, having been built in 1831, and was the first locomotive acquisition by the Smithsonian Institution. The locomotive ran under its own power from Washington, DC, to Chicago to participate, and returned to Washington under its own power again when the exposition closed. In 1981 it was the oldest surviving operable steam locomotive in the world when it ran under its own power again.
An original frog switch and portion of the superstructure of the famous 1826 Granite Railway in Massachusetts could be viewed. This was the first commercial railroad in the United States to evolve into a common carrier without an intervening closure. The railway brought granite stones from a rock quarry in Quincy, Massachusetts, so that the Bunker Hill Monument could be erected in Boston. The frog switch is now on public view in East Milton Square, Massachusetts, on the original right-of-way of the Granite Railway.
Norway participated by sending the Viking, a replica of the Gokstad ship. It was built in Norway and sailed across the Atlantic by 12 men, led by Captain Magnus Andersen. In 1919 this ship was moved to Lincoln Park. It was relocated in 1996 to Good Templar Park in Geneva, Illinois, where it awaits renovation.
The 1893 Parliament of the World’s Religions, which ran from September 11 to September 27, marked the first formal gathering of representatives of Eastern and Western spiritual traditions from around the world. According to Eric J. Sharpe, Tomoko Masuzawa, and others, the event was considered radical at the time, since it allowed non-Christian faiths to speak on their own behalf; it was not taken seriously by European scholars until the 1960s.
Visitors to the Louisiana Pavilion were each given a seedling of a cypress tree. This resulted in the spread of cypress trees to areas where they were not native. Cypress trees from those seedlings can be found in many areas of West Virginia, where they flourish in the climate.
Along the banks of the lake, patrons on the way to the casino were carried on a moving walkway, the first of its kind open to the public. Called the Great Wharf Moving Sidewalk, it allowed people to walk along or ride in seats.
The German firm Krupp had a pavilion of artillery, which apparently had cost one million dollars to stage, including a coastal gun of 42 cm in bore (16.54 inches) and a length of 33 calibres (45.93 feet, 14 meters). A breech-loading gun, it weighed 120.46 long tons (122.4 metric tons). According to the company's marketing: "It carried a charge projectile weighing from 2,200 to 2,500 pounds which, when driven by 900 pounds of brown powder, was claimed to be able to penetrate at 2,200 yards a wrought iron plate three feet thick if placed at right angles." Nicknamed "The Thunderer", the gun had an advertised range of 15 miles; on this occasion John Schofield declared Krupp's guns "the greatest peacemakers in the world". This gun was later seen as a precursor of the company's World War I Dicke Berta howitzers.
Notable firsts at the fair
- Frederick Jackson Turner lectured on his Frontier thesis
- Contribution to Chicago's nickname, the "Windy City". Some argue that Charles Anderson Dana of the New York Sun coined the term related to the hype of the city's promoters. Other evidence, however, suggests the term was used as early as 1881 in relation to either Chicago's "windbag" politicians or to its weather.
- United States Mint offered its first commemorative coins: a quarter and half dollar
- The United States Post Office Department produced its first picture postcards and Commemorative stamp set
Edibles and potables
- F.W. Rueckheim introduced a confection of popcorn, peanuts and molasses that was given the name Cracker Jack in 1896
- Cream of Wheat
- Milton Hershey bought a European exhibitor's chocolate manufacturing equipment and added chocolate products to his caramel manufacturing business
- Juicy Fruit gum
- Pabst Blue Ribbon
- Quaker Oats
- Shredded Wheat
Inventions and manufacturing advances
- The "clasp locker," a clumsy slide fastener and forerunner to the zipper was demonstrated by Whitcomb L. Judson
- Elongated coins (the squashed penny)
- Ferris Wheel
- First fully electrical kitchen including an automatic dishwasher
- Phosphorescent lamps (a precursor to fluorescent lamps)
- John T. Shayne & Company, the local Chicago furrier, helped America gain respect on the world stage of manufacturing
- To hasten the painting process during construction of the fair in 1892, Francis Davis Millet invented spray painting
- A device that made plates for printing books in Braille, unveiled by Frank Haven Hall, who met Helen Keller and her teacher at the exhibit.
- Congress of Mathematicians, precursor to International Congress of Mathematicians
- Interfaith dialogue (the Parliament of the World’s Religions)
- The poet and humorist Benjamin Franklin King, Jr. first performed at the exposition.
Later years
The exposition was one influence leading to the rise of the City Beautiful movement. Results included grand buildings and fountains built around Olmstedian parks, shallow pools of water on axis to central buildings, larger park systems, broad boulevards and parkways and, after the start of the 20th century, zoning laws and planned suburbs. Examples of the City Beautiful movement's works include the City of Chicago, the Columbia University campus, and the National Mall in Washington D.C.
After the fair closed, J.C. Rogers, a banker from Wamego, Kansas, purchased several pieces of art that had hung in the rotunda of the U.S. Government Building. He also purchased architectural elements, artifacts and buildings from the fair. He shipped his purchases to Wamego. Many of the items, including the artwork, were used to decorate his theater, now known as the Columbian Theatre.
Memorabilia saved by visitors can still be purchased. Numerous books, tokens, published photographs, and well-printed admission tickets can be found. While the higher value commemorative stamps are expensive, the lower ones are quite common. So too are the commemorative half dollars, many of which went into circulation.
When the exposition ended the Ferris Wheel was moved to Chicago's north side, next to an exclusive neighborhood. An unsuccessful Circuit Court action was filed against the owners of the wheel to have it moved. The wheel stayed there until it was moved to St. Louis for the 1904 World's Fair.
See also
- List of world expositions
- Benjamin W. Kilburn, stereoscopic view concession and subsequent views of the World's Columbian Exposition.
- Herman Webster Mudgett, serial killer associated with the 1893 World's Fair
- St. John Cantius in Chicago, whose main altar, as well as its matching two side altars, reputedly originate from the 1893 Columbian Exposition
- Spectacle Reef Light
- World's Largest Cedar Bucket
- Fairy lamp, candle sets popularized at Queen Victoria's Golden Jubilee were used to illuminate an island at the Expo
Media about the fair
- 1893: A World's Fair Mystery, an interactive fiction by Peter Nepstad that recreates the Exposition in detail
- Devil in the White City, non-fiction book intertwining the true tales of the architect behind the 1893 World's Fair and a serial killer
- Expo: Magic of the White City, a documentary film about the exposition
- Jimmy Corrigan, the Smartest Kid on Earth, a graphic novel set in part at the Chicago World's Columbian Exposition of 1893
- Wonder of the Worlds, an adventure novel where Nikola Tesla, Mark Twain and Houdini pursue Martian agents who have stolen a powerful crystal from Tesla at the Columbian Exposition
References
- Truman, Benjamin (1893). History of the World's Fair: Being a Complete and Authentic Description of the Columbian Exposition From Its Inception. Philadelphia, PA: J. W. Keller & Co.
- Moses Purnell Handy, "The Official Directory of the World's Columbian Exposition, May 1st to October 30th, 1893: A Reference Book of Exhibitors and Exhibits, and of the Officers and Members of the World's Columbian Commission Books of the Fairs" (William B. Conkey Co., 1893) P. 75 (See: Google Books). See also: Memorial Volume. Joint Committee on Ceremonies, Dedicatory And Opening Ceremonies of the World's Columbian Exposition: Historical and Descriptive, A. L. Stone: Chicago, 1893. P. 306.
- "Municipal Flag of Chicago". Chicago Public Library. 2009. Retrieved 2009-03-04.
- "World's Columbian Exposition", Encyclopedia of Chicago
- Birgit Breugal for the EXPO2000 Hannover GmbH Hannover, the EXPO-BOOK The Official Catalogue of EXPO2000 with CDROM
- Rydell, Robert W. (1987). All the World's a Fair: Visions of Empire at American International Expositions, p. 53. University of Chicago. ISBN 0-226-73240-1.
- Larson, Erik (2003). The Devil in the White City: Murder, Magic and Madness at the Fair that Changed America. New York, NY: Crown. ISBN 0-609-60844-4.
- Sullivan, Louis (1924). Autobiography of an Idea. New York City: Press of the American Institute of Architects, Inc. p. 325.
- "Falmouth Museums on the Green", Falmouth Historical Society
- Petterchak 2003, pp. 17–18
- Muʿīn al-Salṭana, Muḥammad ʿAlī (Hāǧǧ Mīrzā), Safarnāma-yi Šīkāgū : ḵāṭirāt-i Muḥammad ʿAlī Muʿīn al-Salṭana bih Urūpā wa Āmrīkā : 1310 Hiǧrī-yi Qamarī / bih kūšiš-i Humāyūn Šahīdī, [Tihrān] : Intišrāt-i ʿIlmī, 1984, 1363/.
- Levy, John M. (2009) Contemporary Urban Planning.
- About The Museum - Museum History - Museum of Science and Industry, Chicago, USA
- David J. Bertuca, Donald K. Hartman, Susan M. Neumeister, The World's Columbian Exposition: A Centennial Bibliographic Guide, page xxi
- John W. Klooster, Icons of Invention: The Makers of the Modern World from Gutenberg to Gates, page 307
- Margaret Cheney, Tesla: Man Out of Time, page 76
- Margaret Cheney, Tesla: Man Out of Time, page 79
- US Patent 453,742 dated 9 June 1891
- Pruter, Robert (2005). "Chicago Lights Up Football World". LA 4 Foundation. XVIII (II): 7–10.
- Harper, Douglas. "midway". Chicago Manual Style (CMS). Online Etymology Dictionary. Retrieved 12 April 2013.
- Clegg, Brian (2007). The Man Who Stopped Time. Joseph Henry Press. ISBN 0-309-10112-3.
- "The World's Columbian Exposition (1893)". The American Experience. PBS. 1999. Retrieved 2009-12-21.
- Adams, Cecil (2007-02-27). "What is the origin of the song "There's a place in France/Where the naked ladies dance?" Are bay leaves poisonous?". The Straight Dope. Retrieved 2009-12-21.
- Southern, pg. 283
- Caldwell Titcomb (Spring 1990). "Black String Musicians: Ascending the Scale". Black Music Research Journal (Center for Black Music Research - Columbia College Chicago and University of Illinois Press) 10 (1): 107–112. doi:10.2307/779543. JSTOR 779543.
- Terry Waldo (1991). This is Ragtime. Da Capo Press.
- Brunvand, Jan Harold (1998). "Christensen, Abigail Mandana ("Abbie") Holmes (1852-1938)". American folklore: an encyclopedia. Taylor & Francis. p. 142. ISBN 978-0-8153-3350-0.
- Diamond, Beverly; Barbara Benary. "Indonesian Music". The Garland Encyclopedia of World Music. pp. 1011–1023.
- Stillman, Amy Ku'uleialoha. "Polynesian Music". The Garland Encyclopedia of World Music. pp. 1047–1053.
- Credit: Dvorak Museum Heritage Association article "Dvorak in America" http://www.dvoraknyc.org/Dvorak_in_America.html. Text in this and related sections adapted from Maurice Peress, "Dvorak to Duke Ellington: A Conductor Explores America’s Music and Its African American Roots" (New York: Oxford University Press, 2004).
- HistoryLink Essay: Cutter, Kirtland Kelsey
- Arts & Crafts Movement Furniture
- Nepstad, Peter. "The Viking Shop in Jackson Park" (pdf). Hyde Park Historical Society. Retrieved 2009-01-24.
- Smith, Gerry (2008-06-26). "Viking ship from 1893 Chicago world's fair begins much-needed voyage to restoration". Chicago Tribune (Tribune Company). Retrieved 2009-01-24.
- Masuzawa, Tomoko (2005). The Invention of World Religions. Chicago University of Chicago Press. pp. 270–274. ISBN 978-0-226-50989-1.
- "Kate McPhelim Cleary: A Gallant Lady Reclaimed" Lopers.net. Accessed October 6, 2008.
- Wonderful West Virginia magazine, August 2007 at pg. 6
- Bolotin, Norman, and Christine Laing. The World's Columbian Exposition: the Chicago World's Fair of 1893. Chicago: University of Illinois Press, 2002.
- Chaim M. Rosenberg (2008). America at the fair: Chicago's 1893 World's Columbian Exposition. Arcadia Publishing. pp. 229–230. ISBN 978-0-7385-2521-1.
- John Birkinbine (1893) "Prominent Features of the World's Columbian Exposition", Engineers and engineering, Volume 10, p. 292; for the metric values see Ludwig Beck (1903). Die geschichte des eisens in technischer und kulturgeschiehtlicher beziehung: abt. Das XIX, jahrhundert von 1860 an bis zum schluss. F. Vieweg und sohn. p. 1026.
- Hermann Schirmer (1937). Das Gerät der Artillerie vor, in und nach dem Weltkrieg: Das Gerät der schweren Artillerie. Bernard & Graefe. p. 132. "The step from a short 42 cm L/33 gun to a howitzer with a lower muzzle velocity and a projectile weight roughly one fifth lower was not very great."
- Robert de Boer (2009) Alexander Macfarlane in Chicago, 1893 from WebCite
- Talen, Emily (2005). New Urbanism and American Planning: The Conflict of Cultures, p. 118. Routledge. ISBN 0-415-70133-3.
- Crawford, Richard (2001). America's Musical Life: A History. W. W. Norton & Company. ISBN 0-393-04810-1.
- Southern, Eileen (1997). Music of Black Americans. New York: W.W. Norton & Co. ISBN 0-393-03843-2.
- Petterchak, Janice A. (2003). Lone Scout: W. D. Boyce and American Boy Scouting. Rochester, Illinois: Legacy Press. ISBN 0-9653198-7-3.
- Neuberger, Mary. 2006. "To Chicago and Back: Alecko Konstantinov, Rose Oil, and the Smell of Modernity" in Slavic Review, Fall 2006.
Further reading
- Appelbaum, Stanley (1980). The Chicago World's Fair of 1893. New York: Dover Publications, Inc. ISBN 0-486-23990-X
- Arnold, C.D. Portfolio of Views: The World's Columbian Exposition. National Chemigraph Company, Chicago & St. Louis, 1893.
- Bancroft, Hubert Howe. The Book of the Fair: An Historical and Descriptive Presentation of the World's Science, Art and Industry, As Viewed through the Columbian Exposition at Chicago in 1893. New York: Bounty, 1894.
- Barrett, John Patrick, Electricity at the Columbian Exposition. R.R. Donnelley, 1894.
- Bertuca, David, ed. "World's Columbian Exposition: A Centennial Bibliographic Guide". Westport, CT: Greenwood Press, 1996. ISBN 0-313-26644-1
- Buel, James William. The Magic City. New York: Arno Press, 1974. ISBN 0-405-06364-4
- Burg, David F. Chicago's White City of 1893. Lexington, KY: The University Press of Kentucky, 1976. ISBN 0-8131-0140-9
- Dybwad, G. L., and Joy V. Bliss, "Annotated Bibliography: World's Columbian Exposition, Chicago 1893." Book Stops Here, 1992. ISBN 0-9631612-0-2
- Eagle, Mary Kavanaugh Oldham, d. 1903, ed. The Congress of Women: Held in the Woman's Building, World's Columbian Exposition, Chicago, U. S. A., 1893, With Portraits, Biographies and Addresses. Chicago: Monarch Book Company, 1894.
- Elliott, Maud Howe, 1854–1948, ed. Art and Handicraft in the Woman's Building of the World's Columbian Exposition, Chicago, 1893. Chicago and New York: Rand, McNally and Co., 1894.
- Glimpses of the World's Fair: A Selection of Gems of the White City Seen Through A Camera, Laird & Lee Publishers, Chicago: 1893, accessed February 13, 2009.
- Larson, Erik. Devil in the White City: Murder, Magic, and Madness at the Fair That Changed America. New York: Crown, 2003. ISBN 0-375-72560-1.
- Photographs of the World's Fair: an elaborate collection of photographs of the buildings, grounds and exhibits of the World's Columbian Exposition with a special description of The Famous Midway Plaisance. Chicago: Werner, 1894.
- Reed, Christopher Robert. "All the World Is Here!" The Black Presence at White City. Bloomington: Indiana University Press, 2000. ISBN 0-253-21535-8
- Rydell, Robert, and Carolyn Kinder Carr, eds. Revisiting the White City: American Art at the 1893 World's Fair. Washington, D.C.: Smithsonian Institution, 1993. ISBN 0-937311-02-2
- Wells, Ida B. The Reason Why the Colored American Is Not in the World's Columbian Exposition: The Afro-American's Contribution to Columbian Literature. Originally published 1893. Reprint ed., edited by Robert W. Rydell. Champaign: University of Illinois Press, 1999. ISBN 0-252-06784-3
- World's Columbian Exposition (1893 : Chicago, Ill.). Board of Lady Managers. List of Books Sent by Home and Foreign Committees to the Library of the Woman's Building, World's Columbian Exposition, Chicago, 1893 by World's Columbian Exposition (1893 : Chicago, Ill.). Board of Lady Managers; edited by Edith E. Clarke. Chicago: n. pub., ca. 1894. Bibliography.
- Yandell, Enid. Three Girls in a Flat by Enid Yandell, Jean Loughborough and Laura Hayes. Chicago: Bright, Leonard and Co., 1892. Biographical account of women at the fair.
- The Columbian Exposition in American culture.
- Photographs of the 1893 Columbian Exposition
- Photographs of the 1893 Columbian Exposition from Illinois Institute of Technology
- Interactive map of Columbian Exposition
- Chicago Postcard Museum—A complete collection of the 1st postcards produced in the U.S. for the 1893 Columbian Exposition.
- "Expo: Magic of the White City," a documentary about the World's Columbian Exposition narrated by Gene Wilder
- A large collection of stereoviews of the fair
- The Winterthur Library Overview of an archival collection on the World's Columbian Exposition.
- Columbian Theatre History and information about artwork from the U.S. Government Building.
- Photographs and interactive map from the 1893 Columbian Exposition from the University of Chicago
- Video simulations from the 1893 Columbian Exposition from UCLA's Urban Simulation Team
- 1893 Columbian Exposition Concerts
- Edgar Rice Burroughs' Amazing Summer of '93 - Columbian Exposition
- International Eisteddfod chair, Chicago, 1893
- Photographs of the Exposition from the Hagley Digital Archives
- 1893 Chicago World Columbia Exposition: A Collection of Digitized Books from the University of Illinois at Urbana-Champaign
- Map of Chicago Columbian Exposition from the American Geographical Society Library
- Interactive Map of the Chicago Columbian Exposition, created in the Harvard Worldmap Platform
Real wages
The term real wages refers to wages that have been adjusted for inflation. (In economics, inflation is a rise in the general level of prices of goods and services in an economy over a period of time; as the general price level rises, each unit of currency buys fewer goods and services, eroding the purchasing power of money.) The term is used in contrast to nominal wages, or unadjusted wages. Real wages provide a clearer representation of an individual's wages.
Adjusted figures are used in undertaking some forms of economic analysis. For example, in order to report on the relative economic successes of two nations, real wage figures are much more useful than nominal figures.
If nominal figures are used in an analysis, then statements may be incorrect. A report could state: 'Country A is becoming wealthier each year than Country B because its wage levels are rising by an average of $500 compared to $250 in Country B'. However, the conclusion that this statement draws could be false if the values used are not adjusted for inflation. An inflation rate of 100 percent in Country A will result in its citizens becoming rapidly poorer than those of Country B where inflation is only 2 percent. Taking inflation into account, the conclusion is quite different: 'Despite nominal wages in Country A rising faster than those in Country B, real wages are falling significantly as the currency halves in value each year'.
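To make the adjustment concrete, the following minimal Python sketch deflates each country's nominal wage increase by its inflation rate. The $10,000 starting wage is a hypothetical figure added purely for illustration; only the $500 and $250 increases and the 100 percent and 2 percent inflation rates come from the example above.

    def real_growth(old_wage, new_wage, inflation):
        """Real (inflation-adjusted) wage growth, given nominal wages and inflation."""
        nominal_growth = new_wage / old_wage          # e.g. 1.05 for a 5% nominal rise
        return nominal_growth / (1 + inflation) - 1   # deflate by the rise in prices

    # Hypothetical $10,000 base wage; increases and inflation rates from the text.
    print(f"Country A: {real_growth(10_000, 10_500, 1.00):+.1%}")  # roughly -47.5%
    print(f"Country B: {real_growth(10_000, 10_250, 0.02):+.1%}")  # roughly +0.5%

Despite the larger nominal rise, Country A's real wages fall by nearly half, while Country B's rise slightly.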
The importance of considering real wages also appears when looking at the history of a single country. If only nominal wages are considered, the conclusion has to be that people used to be a great deal poorer than today. The cost of living was also much lower. In order to have an accurate view of a nation's wealth in any given year, inflation has to be taken into account — and thus using real wages as the measuring stick.
Real wages are a useful economic measure, as opposed to nominal wages, which simply show the monetary value of wages in that year. However, real wages does not take into account other compensation like benefits or old age pensions.
Consider an example economy with the following wages over three years:
- Year 1: $20,000
- Year 2: $20,400
- Year 3: $20,808
Real Wage = W / P, where W is the nominal wage and P is the price level (a price index that rises with inflation).
Also assume that the inflation in this economy is 2 percent p.a. These figures have very different meanings depending on whether they are real wages or nominal wages.
If the figures that are shown are real wages, then it can be determined that wages have increased by 2 percent after inflation has been taken into account. In effect, an individual making this wage actually has more money than the previous year.
However, if the figures that are shown are nominal wages, then the wages are not really increasing at all. In absolute dollar amounts, an individual is bringing home more money each year, but the increases in inflation actually zero out the increases in their salary. Given that inflation is increasing at the same pace as wages, an individual cannot actually afford to increase their consumption.
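The difference between the two readings can be checked numerically. The sketch below is a minimal Python illustration using the yearly figures and the 2 percent inflation rate assumed above; it deflates each year's figure into year-1 dollars using Real Wage = W / P.

    nominal_wages = [20_000, 20_400, 20_808]   # the three yearly figures from the example
    inflation = 0.02                           # 2% per year

    # Price index P, normalised to 1.0 in year 1 and compounding with inflation.
    price_index = [(1 + inflation) ** year for year in range(len(nominal_wages))]

    # Real wage = W / P, expressed in year-1 dollars.
    real_wages = [w / p for w, p in zip(nominal_wages, price_index)]

    for year, (w, r) in enumerate(zip(nominal_wages, real_wages), start=1):
        print(f"Year {year}: nominal ${w:,}  real ${r:,.2f}")

If the listed figures are nominal wages, every real value comes out at $20,000: the 2 percent raises are exactly cancelled by 2 percent inflation. If the figures are already real wages, purchasing power genuinely grows by 2 percent a year.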
Treaty establishing a Constitution for Europe
The Treaty establishing a Constitution for Europe, commonly referred to as the European Constitution, is an international treaty signed in 2004 and currently awaiting ratification, intended to create a constitution for the European Union. Its main aims are to replace the overlapping set of existing treaties that comprise the Union's current constitution, and to streamline decision-making in what is now a 25-member organisation. Despite its name, it only covers the European Union, not the whole of Europe in the geographical sense.
The Constitution was drafted by the European Convention, convened for the purpose as a result of the Laeken Declaration of 2001. The Convention published its draft in July 2003, and ensuing negotiations between member states, which were often fraught, ended with agreement on a final document the following June. The constitutional treaty was signed on October 29, 2004, and now awaits ratification by all member states. The treaty is scheduled to enter into force on November 1, 2006, provided that it is ratified by all 25 member states of the Union. Critically, this will be subject to a referendum in ten countries.
History and ratification
Main article: History of the European Constitution
The Constitution is based on the EU's two primary existing treaties, the Treaty of Rome of 1957 and the Maastricht treaty of 1992, as modified by the more recent treaties of Amsterdam (1997) and Nice (2001). The need to consolidate the EU's constitution was highlighted in the text of the Treaty of Nice, and the process was begun following the Laeken declaration in December 2001, when the European Convention was established to produce a draft of the Constitution, which was eventually published in July 2003. After protracted negotiations during which disputes arose over the proposed framework for qualified majority voting, the final text of the proposed Constitution was agreed upon in June 2004.
The constitutional treaty was signed in a ceremony at Rome on October 29, 2004. Before it enters into force, however, it must also be ratified by each member state. This process is likely to take around two years to complete. Ratification takes different forms in each country, depending on its traditions, constitutional arrangements, and political processes. Lithuania, Hungary, Slovenia, Italy and Greece have already completed parliamentary ratification of the treaty. In addition the European Parliament has also approved the treaty by a huge majority (in a symbolic rather than a binding vote). Ten of the 25 member states have announced their intention to hold a referendum on the subject, one of which has now taken place. In some cases, the result will be legally binding; in others it will be consultative:
|Ratification of the Treaty via referenda|
|Country|Date|Result|Type|
|Spain|20 February, 2005|76.7% Yes|Consultative referendum|
|France|29 May, 2005||Referendum|
|Netherlands|1 June, 2005||Consultative referendum|
|Luxembourg|10 July, 2005||Referendum|
||25 September, 2005 (proposed date)||Referendum|
|Denmark|27 September, 2005||Referendum|
||December 2005 (proposed)||Referendum|
|Parliamentary approval of the Treaty|
|Country|Date|Result|
|Lithuania|11 November, 2004|Yes. 84 to 4 in favour.|
|Hungary|20 December, 2004|Yes. 322 to 12 in favour.|
|European Parliament|12 January, 2005|Yes. 500 to 137 in favour.|
|Slovenia|1 February, 2005|Yes. 79 to 4 in favour.|
|Italy|6 April, 2005|Yes. Lower house: 436 to 28 in favour. Upper house: 217 to 16 in favour.|
|Greece|19 April, 2005|Yes. 268 to 17 in favour.|
||(Expected) April 2005||
||12 May, 2005||
||(Expected end of) May 2005||
||(Expected end of) May 2005||
||(Expected end of) May 2005||
||(Expected) July 2005||
||(Expected end of) December 2005||
||(Expected end of) December 2005||
Strengthened or newly codified provisions
Functioning of the Union
- The principle of conferral
The Constitution specifies that the EU is a union of member states, and that all its competences (areas of responsibility) are voluntarily conferred on it by its member states according to the principle of conferral. The EU has no competences by right, and thus any areas of policy not explicitly specified in the Constitution remain the domain of the sovereign member states (notwithstanding the ‘flexibility clause' – see below).
This is explicitly specified for the first time, but since the Union has always been a treaty-based organisation, it has always been the case by default under international law.
- The principle of subsidiarity
According to the Constitution, the EU may only act (i.e. make laws) where its member states agree unanimously that actions by individual countries would be insufficient. This is the principle of subsidiarity, and is based on the legal and political principle that governmental decisions should be taken as close to the people as possible while still remaining effective.
- The principle of proportionality
In all areas, the EU may only act to exactly the extent that is needed to achieve its objectives (the principle of proportionality).
- Obligations of member states
Member states have constitutional obligations. Since the Constitution has the legal status of a treaty, these obligations have the legal status of treaty obligations. They are:
- to ensure implementation at national level of what is decided at EU level;
- to support the EU in achieving its tasks;
- not to jeopardise shared EU objectives.
- Primacy of Union law
In accordance with the norms of international law among European countries, EU law has primacy over the laws of member states in the areas where member states allow it to legislate. In other words, no member state may pass a national law which is incompatible with an agreement already made at European level.
This principle has been the case since the Community was founded in 1957. It is the principle from which the judgements of the European Court of Justice derive their legitimacy.
- Mutual values of the Union's member states
As stated in Articles I-1 and I-2, the Union is open to all European States which respect the following common values:
- human dignity
- freedom
- democracy
- equality
- the rule of law
- respect for human rights, including the rights of persons belonging to minorities
Member states also declare that the following principles prevail in their society:
- pluralism
- non-discrimination
- tolerance
- justice
- solidarity
- equality between women and men
These provisions are not new, but some of them are codified for the first time.
- Aims of the Union
The aims of the EU are made explicit (Article I-3):
- promotion of peace, its values and the well-being of its peoples
- maintenance of freedom, security and justice without internal frontiers, and an internal market where competition is free and undistorted
- sustainable development based on balanced economic growth and price stability, a highly competitive social market economy
- social justice and protection, equality between women and men, solidarity between generations and protection of the rights of the child
- economic, social and territorial cohesion, and solidarity among member states
- respect for linguistic and cultural diversity
In its relations with the wider world the Union's objectives are:
- to uphold and promote its values and interests
- to contribute to peace, security, the sustainable development of the Earth
- solidarity and mutual respect among peoples
- free and fair trade
- eradication of poverty and the protection of human rights, in particular the rights of the child
- strict observance and development of international law, including respect for the principles of the United Nations Charter.
Scope of the Union
The EU has six exclusive competences. These are policy areas in which member states have agreed that they should act exclusively through the EU and not legislate at a national level at all. The list remains unchanged from the previous treaties:
- the customs union
- the establishing of competition rules necessary for the functioning of the internal market
- monetary policy, for the member states whose currency is the euro
- the conservation of marine biological resources under the common fisheries policy
- the common commercial policy
- the conclusion of certain international agreements
There are a number of shared competences. These are areas in which member states agree to act individually only where they have not already acted through the EU, or where the EU has ceased to act (though there are a few areas where member states may act both nationally and through the EU if they wish). The list of areas is mostly unchanged from previous treaties, with three new competences added (see below).
There are a number of areas where the EU may only take supporting, coordinating or complementary action. In these areas, member states do not confer any competences on the Union, but they agree to act through the Union in order to support their work at national level. Again, the list of areas is mostly unchanged from previous treaties, with three new competences added (see below).
- Flexibility clause
The Constitution's flexibility clause allows the EU to act in areas not made explicit in the Constitution, but:
- only if all member states agree;
- only with the consent of the European Parliament; and
- only where this is necessary to achieve an agreed objective under the Constitution.
This clause has been present in EU law since the original Treaty of Rome, which established the EEC in 1958. It is designed to allow EU countries to develop new areas of co-operation without needing to go through the process of a full treaty revision.
- Common foreign and security policy
The EU is charged with defining and implementing a common foreign and security policy in due time. The wording of this article is taken directly from the existing Treaty on European Union, with no new provisions.
Main article: Institutions of the European Union
The institutional structure of the Union is unchanged. The Council of the European Union is now formally renamed as the 'Council of Ministers', which had already been its informal title. The "General Affairs Council" is formally split from the "Foreign Affairs Council" (previously the "General Affairs and External Relations" configuration was technically a single formation, although the two had held separate meetings since June 2002).
- Symbols of the Union
Main article: European symbols
- Dialogues with civic society
According to the Constitution, the EU maintains a dialogue with churches and non-confessional organisations.
Scope of the Union
- Legal personality
The European Union has legal personality under the Constitution. This means that it is able to represent itself as a single body in certain circumstances under international law. Most significantly, it is able to sign treaties as a single body where all its member states agree.
This provision is not new in one sense, since the European Community has always had legal personality. But the parallel Community and Union structures are now merged and simplified as a single entity, so a new recognition of the Union's legal personality is required.
- New competences
The EU has conferred upon it as new 'shared competences' the areas of territorial cohesion, energy, and space. These are areas where the EU may act alongside its individual member states.
The EU has conferred upon it as new areas of 'supporting, coordinating or complementary action' the areas of tourism, sport, and administrative co-operation.
- Criminal justice proceedings
Member states will continue to co-operate in some areas of criminal judicial proceedings where they agree to do so, as at present. Under the Constitution, seven new areas of co-operation are added:
- trafficking in persons;
- offences against children;
- drugs trafficking;
- arms trafficking;
- Solidarity clause
The new solidarity clause specifies that any member state which falls victim to a terrorist attack or other disaster will receive assistance from other member states, if it requests it. This was already the case in practice, but it is now officially codified. The specific arrangements will be decided by the Council of Ministers.
- European Public Prosecutor
- Charter of Fundamental Rights of the European Union
The Constitution includes a copy of the Charter already agreed to by all EU member states. This is included in the Constitution so that EU institutions themselves are obliged to conform to the same standards of fundamental rights.
- Simplified jargon and legal instruments
The Constitution makes an effort to simplify jargon and reduce the number of EU legal instruments (ways in which EU countries may act). These are also unified across areas of policy (referred to as pillars of the European Union in previous treaties).
- 'European Regulations' (of the Community pillar) and 'Decisions' (of the Police and Judicial Co-operation in Criminal Matters pillar) both become referred to as European laws.
- 'European Directives' (of the Community pillar) and 'Framework Decisions' (of the PJC pillar) both become referred to as 'European framework laws'.
- 'Conventions' (of the PJC pillar) are done away with, replaced in every case by either European laws or European framework laws.
- 'Joint actions' and 'Common positions' (of what is now the Common Foreign and Security Policy Pillar) are both replaced by 'decisions'.
- Merging of High Representative and external relations Commissioner
In the new Constitution, the present role of High Representative for the Common Foreign and Security Policy is amalgamated with the role of the Commissioner for External Relations.
This creates a new Union Minister for Foreign Affairs who is also a Vice President of the Commission. This individual will be responsible for co-ordinating foreign policy across the Union. He or she will also be able to represent the EU abroad in areas where member states agree to speak with one voice.
Functioning of the institutions
- Qualified majority voting
More day-to-day decisions in the Council of Ministers are to be taken by qualified majority voting, requiring a 55 per cent majority of member states representing a 65 per cent majority of citizens. (The 55 per cent is raised to 72 per cent when the Council is acting on its own initiative rather than on a legislative proposal.)
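As a rough illustration of this double-majority arithmetic (the member states, populations and coalition below are invented for the example and are not taken from the treaty), a minimal Python sketch of the test might look like this:

```python
# Hypothetical sketch of the double-majority ("qualified majority") test.
# The thresholds follow the figures above; the population data is invented.

def qualified_majority(coalition, populations, state_share=0.55, pop_share=0.65):
    """Return True if `coalition` (a set of state names) clears both thresholds."""
    enough_states = len(coalition) >= state_share * len(populations)
    total_pop = sum(populations.values())
    coalition_pop = sum(populations[s] for s in coalition)
    enough_people = coalition_pop >= pop_share * total_pop
    return enough_states and enough_people

# Toy example with made-up populations (in millions of citizens):
populations = {"A": 80, "B": 60, "C": 60, "D": 40, "E": 20, "F": 10, "G": 5, "H": 5}
print(qualified_majority({"A", "B", "C", "D", "E"}, populations))  # True
# When the Council acts on its own initiative, raise state_share to 0.72.
```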
The unanimous agreement of all member states is still required for decisions on more sensitive issues, such as tax, social security, foreign policy and defence.
- President of the European Council
The six-month rotating Presidency of the European Council will be replaced by a chair elected by the heads of state or government for a term of two and a half years, renewable once. The role will be the same as now, i.e. administrative and non-executive.
- President of the Council of Ministers
The six-month rotating Presidency of the Council of Ministers, which currently coincides with the Presidency of the European Council, will be changed to an eighteen-month rotating Presidency shared by a trio of member countries, in an attempt to provide more continuity.
The exception is the Council's Foreign Affairs configuration, which will be chaired by the newly created Union Minister for Foreign Affairs.
- Smaller Commission
Parliamentary power and transparency
- President of the Commission
- Parliament as co-legislature
The European Parliament acquires equal legislative power with the Council in virtually all areas of policy (previously, it had this power in most cases, but not all).
- Meeting in public
The Council of Ministers will be required to meet in public when debating new laws. Currently, it meets in public only for texts covered by the codecision procedure, following the decision of the Seville European Council.
The final say over the EU's annual budget is given to the European Parliament. Agricultural spending is no longer ring-fenced, and is brought under the Parliament's control.
- Role of national parliaments
Member states' national parliaments are given a new role in scrutinising proposed EU laws, and are entitled to object if they feel a proposal oversteps the boundary of the Union's agreed areas of responsibility.
- Popular mandate
The Commission is invited to consider any proposal "on matters where citizens consider that a legal act of the Union is required for the purpose of implementing the Constitution" which has the support of one million citizens. The mechanism by which this will be put into practice has yet to be agreed. (See Article I-46(4) for details.)
Further integration, amendment and withdrawal
- Enhanced co-operation
There is a tightening of existing rules for 'enhanced cooperation', where some member states may choose to act together more closely and others not. A minimum of two thirds of member states must now participate in any enhanced cooperation, and the agreement of the European Parliament is needed. The option for enhanced cooperation is also widened to all areas of agreed EU policy.
- Treaty revisions
Previously, alteration of the treaties required the unanimous agreement of the European Council, reached behind closed doors. Any amendment to the Constitutional treaty, however, will involve the convening of a new Convention, similar to that chaired by Valéry Giscard d'Estaing in drafting the Constitution itself. This process may be bypassed if the European Parliament agrees. However, small revisions removing national vetoes can be made by unanimous agreement of the European Council through the Passerelle Clause (Article IV-444).
The final say on adopting proposals will continue to rest with the Council, and needs unanimity of the Council.
- Withdrawal clause
A new clause allows for the withdrawal of any member state without renegotiation of the Constitution or violation of treaty commitments. Under this clause, when a country notifies the Council of its intent to withdraw, a settlement is agreed in the Council with the consent of Parliament. If negotiations are not agreed within two years, the country leaves anyway.
Points of contention
Length and complexity
Critics of the Constitution point out that, compared to many existing national constitutions (such as the 4,600-word US Constitution), the European Constitution is very long, at around 324 pages and over 60,000 words in its English text.
Proponents respond by stating that the document nevertheless remains considerably shorter and less complex than the existing set of treaties that it consolidates. Defenders also point out that it must logically be longer, since it is not an all-embracing, general constitution, but rather a document that precisely delineates the limited areas where the European Union has competence to act over and above the competences of member states.
Qualified majority voting
Qualified majority voting is extended to an additional 26 decision-making areas that had previously required unanimity. Opponents of the Constitution argue that this demonstrates a palpable loss of sovereignty and decision-making power for individual countries. Defenders argue that these provisions only apply in the areas where Member States have agreed it should ("competency") and not otherwise; that it was necessary to prevent decision-making from grinding to a halt in the enlarged Union. (In the past, there have been cases when it appeared that "veto trading" was being used tactically rather than for issues of principle.) Further, the "qualified majority voting" mechanism is structured such that a blocking minority is not difficult to achieve for matters of substance.
Union law and national law
Critics sometimes claim that it is unacceptable for the Constitution to enshrine European laws as taking precedence over national laws, and argue that this is an erosion of national sovereignty.
Defenders of the constitution point out that it has always been the case that EU law supersedes national law, and that it has long been accepted in European nations that international law which a nation subscribed to overrides national law. The proposed Constitution does not change this arrangement for either existing or future EU law. However, the question of whether the arrangement is considered acceptable in the first place is still an issue for debate.
With the widening of Qualified Majority Voting also envisaged in the constitution, however, the issue of the primacy of EU law becomes more sensitive. This is because there is an increase in the number of areas in which laws can be passed by majority vote. It is therefore possible for an individual country to vote against a proposal (unsuccessfully) and subsequently find its national legislature to be bound by it.
Trappings of statehood
It has been argued that the constitution introduces a number of elements that are traditionally the province of sovereign states: flag, motto, anthem. This is something many see as a shift towards the future creation of a single European state, and the corresponding loss of national identity. Many eurosceptics oppose the constitution for this reason.
Defenders of the constitution have pointed out that none of these elements are new, and that many of them are also used by other international organisations. They also argue that key principles enshrined in the constitution, such as the principles of conferral and subsidiarity, are designed to reinforce the status of member states as cooperating sovereign nations, not to erode it.
It has likewise been argued that to call the document a 'Constitution' rather than a 'treaty' implies a change in the nature of the EU, from an association of cooperating countries to a single state or something approaching a state. In response, it has been pointed out that many international organisations, including the World Health Organisation, have constitutions, without this implying that they are states. From a legal point of view the European Constitution will still be a treaty between independent states.
Lack of democracy
It has been argued that the proposed constitution still grants a lot of power to the European Commission, which is appointed by the member states rather than directly elected. The European Parliament, seen by some as the true voice of the people because it is the only directly elected EU institution, still cannot propose new laws, for example.
Some of the articles, which may seem very democratic at first glance, are said by some to be pointless when read more carefully. For instance, the obligation for the Commission to consider a petition by one million citizens only invites such a petition to be considered. It is open to the Commission to decide how to react, including ignoring the petition if it wishes. It is also worth noting that European citizens could already submit petitions to the European Parliament.
Defenders of the Constitution point out that the European Parliament does have the power to oblige the Commission to bring forward a legislative proposal which Parliament and Council may then amend as they see fit. It has been argued that this is sufficient to avoid what might otherwise be regarded as a democratic deficit. It has also been argued that it would not be feasible in practice for the Commission to ignore a mandate from a million citizens, despite the wording in the Constitution. Further, the Commission has no power to enact laws; like a Civil Service, it may only draft proposals into a legal form for others to ratify or reject. Its only real power is to investigate breaches of agreements that the member states themselves have made.
See also democratic deficit.
Article I-41(3) states that: "Member States shall undertake progressively to improve their military capabilities". It has been argued that this will prevent any partial disarmament by member states and require them to increase military capabilities without taking into account the geopolitical situation or the will of the people. The creation of a European armaments office may also contribute to a worldwide arms race, according to some analyses.
Others point out that the same article limits any EU joint military action to "peace-keeping, conflict prevention and strengthening international security" based on UN principles. It is only under this framework that countries agree to develop their military capabilities.
Some commentators have expressed a fear that the proposed Constitution may force upon European countries a Neo-Liberal economic framework which will threaten the European social model. The principles of the "free movement of capital" (both inside the EU and with third countries), and of "free and undistorted competition", are stated several times, and it has been argued that they cover all areas, from healthcare to energy to transport.
The European Central Bank remains independent of any democratic institution, and its only mandated purpose is to fight inflation. This contrasts with other central banks, such as the Federal Reserve, which also has the goal of fighting unemployment.
It has also been argued that existing national Constitutions do not fix economic policies inside the Constitution itself: It is more common for elected governments to retain the power to decide on economic policy.
Unanimity requirement for changes
The major provisions contained in Parts I, II and IV of the Constitution can only be changed with the unanimous agreement of all countries. This requirement for unanimity will effectively prevent further transfer of competences to the Union if a single member state objects.
Defenders of the Constitution point out that it has always required unanimity among member state governments to change a treaty, so this is nothing more than a retaining of the status quo.
It should also be mentioned that there is provision for enhanced cooperation among member countries, under which some countries can choose to integrate more closely in some areas than others. However, this does not constitute an opt-out from the universally agreed provisions in the Constitution. Moreover, enhanced co-operation can be established only under the conditions described in Article III-419, according to which both the Commission and the Council, acting unanimously, must agree. In fact, it is easier to establish enhanced cooperation under the present law of the Union (as modified by the Treaty of Nice); compare, for example, Article III-419 of the constitutional treaty and Article 27-E of the current treaty.
At the same time, Article IV-444 (the Passerelle Clause) allows decisions currently subject to unanimity to be shifted to Qualified Majority Voting if all governments agree, without the need for ratification by national parliaments (though national parliaments would have a six-month period in which they could object if they wish).
Also, for the first time, the Constitution provides an explicit means by which a member state can withdraw entirely from the EU without violating treaty obligations. However, some people have pointed out that this just formalises the existing situation, given that Greenland successfully negotiated its withdrawal in 1985.
Some opponents argue that certain important rights, such as that of habeas corpus, are not provided for or recognised by the Constitution. The Charter of Fundamental Rights of the Union forms Part II of the Constitution, and habeas corpus is not explicitly mentioned among its provisions. However, Article I-9(2) of the Constitution says: "The Union shall accede to the European Convention for the Protection of Human Rights and Fundamental Freedoms", Article 5 of which includes the following:
- Everyone who is deprived of his liberty by arrest or detention shall be entitled to take proceedings by which the lawfulness of his detention shall be decided speedily by a court and his release ordered if the detention is not lawful.
Consequently, while the Constitution makes no explicit mention of habeas corpus, the Union must still uphold it because it is constitutionally bound to accede to the European Convention on Human Rights. Advocates of the Constitution often allege that in cases like this, eurosceptics seek to mislead the public by encouraging them to think that if the Constitution is adopted, habeas corpus will be abolished or might not be guaranteed in the future.
Development of the Treaties into EU Constitution
External links and references
- A Constitution for Europe — EU's official Constitution site, including full text in the official languages.
- Reader-friendly edition of the EU Constitution — Highlights and commentary (PDFs).
- History of the Constitution — Academic site linking to many documents concerning the preparation, negotiation and ratification stages of the Constitution and previous treaties.
- Constitution search engine — On-line search engine for the constitution text.
Endocarditis (EN-do-kar-DI-tis) is an infection of the inner lining of your heart chambers and valves. This lining is called the endocardium. The condition also is called infective endocarditis (IE).
The term "endocarditis" also is used to describe an inflammation of the endocardium due to other conditions. This article only discusses endocarditis related to infection.
IE occurs if bacteria, fungi, or other germs invade your bloodstream and attach to abnormal areas of your heart. The infection can damage the heart and cause serious and sometimes fatal complications.
IE can develop quickly or slowly. How the infection develops depends on what type of germ is causing it and whether you have an underlying heart problem. When IE develops quickly, it's called acute infective endocarditis. When it develops slowly, it's called subacute infective endocarditis.
IE mainly affects people who have:
- Damaged or artificial heart valves
- Congenital heart defects (defects present at birth)
- Implanted medical devices in the heart or blood vessels
People who have normal heart valves also can get IE. However, the condition is much more common in people who have abnormal hearts.
Certain factors make it easier for bacteria to enter your bloodstream. These factors also put you at higher risk for the infection. For example, if you've had IE before, you're at higher risk for the infection.
Other risk factors include having poor dental hygiene and unhealthy teeth and gums, using intravenous (IV) drugs, and having catheters or other medical devices in your body for long periods.
Common symptoms of IE are fever and other flu-like symptoms. Because the infection can affect people in different ways, the signs and symptoms vary. IE also can cause complications in many other parts of the body besides the heart.
If you're at high risk for IE, seek medical care if you have signs or symptoms of the infection, especially a fever that persists or unexplained fatigue (tiredness).
IE is treated with antibiotics for several weeks. You also may need heart surgery to repair or replace heart valves or remove infected heart tissue.
Most people who are treated with the proper antibiotics recover. But if the infection isn't treated, or if it persists despite treatment (for example, if the bacteria are resistant to antibiotics), it's usually fatal.
If you have signs or symptoms of IE, you should see your doctor as soon as you can, especially if you have abnormal heart valves.
What Causes Endocarditis?
Infective endocarditis (IE) occurs when bacteria, fungi, or other germs invade your bloodstream and attach to abnormal areas of your heart. Certain factors increase the risk of germs attaching to a heart valve or chamber and causing an infection.
A common underlying factor in IE is a structural heart defect, especially faulty heart valves. Usually your immune system will kill germs in your bloodstream. However, if your heart has a rough lining or abnormal valves, the invading germs can attach and multiply in the heart.
Other factors, such as those that allow germs to build up in your bloodstream, also can play a role in causing IE. Common activities, such as brushing your teeth or having certain dental procedures, can allow bacteria to enter your bloodstream. This is even more likely to happen if your teeth and gums are in poor condition.
Having a catheter or other medical devices inserted through your skin, especially for long periods, also can allow bacteria to enter your bloodstream. People who use intravenous (IV) drugs also are at risk for infections due to germs on needles and syringes.
Bacteria also may spread to the blood and heart from infections in other parts of the body, such as the gut, skin, or genitals.
As the bacteria or other germs multiply in your heart, they form clumps with other cells and matter found in the blood. These clumps are called vegetations (vej-eh-TA-shuns).
As IE worsens, pieces of the vegetations can break off and travel to almost any other organ or tissue in the body. There, the pieces can block blood flow or cause a new infection. As a result, IE can cause a wide range of complications.
Heart problems are the most common complication of IE. They occur in one-third to one-half of all people who have the infection. These problems may include a new heart murmur, heart failure, heart valve damage, heart block, or, rarely, a heart attack.
Central Nervous System Complications
These complications occur in as many as 20 to 40 percent of people who have IE. Central nervous system complications most often occur when bits of the vegetation, called emboli (EM-bo-li), break away and lodge in the brain.
There, they can cause local infections (called brain abscesses) or a more widespread brain infection (called meningitis).
Emboli also can cause a stroke or seizures. This happens if they block blood vessels or affect the brain's electrical signals. These complications can cause long-lasting damage to the brain and may even be fatal.
Complications in Other Organs
IE also can affect other organs in the body, such as the lungs, kidneys, and spleen.
Lungs. The lungs are especially at risk when IE affects the right side of the heart. This is called right-sided infective endocarditis.
Kidneys. IE can cause kidney abscesses and kidney damage. IE also can cause inflammation of the internal filtering structures of the kidneys.
Signs and symptoms of kidney complications include back or side pain, blood in the urine, or a change in the color or amount of urine. In a small number of people, IE can cause kidney failure.
Spleen. The spleen is an organ located in the left upper part of the abdomen near the stomach. In as many as 25 to 60 percent of people who have IE, the spleen enlarges (especially in people who have long-term IE).
Sometimes, emboli also can damage the spleen. Signs and symptoms of spleen problems include pain or discomfort in the upper left abdomen and/or left shoulder, a feeling of fullness or the inability to eat large meals, and hiccups.
Who Is At Risk for Endocarditis?
Infective endocarditis (IE) is an uncommon condition that can affect both children and adults. It's more common in men than women.
IE typically affects people who have abnormal hearts or other conditions that make them more likely to get the infection. In some cases, IE does affect people who were healthy before the infection.
Major Risk Factors
The germs that cause IE tend to attach and multiply on damaged, malformed, or artificial heart valves and implanted medical devices. Certain conditions put you at higher risk for IE. These include:
- Congenital heart defects (defects that are present at birth). Examples include a malformed heart or abnormal heart valves.
- Artificial heart valves; an implanted medical device in the heart, such as a pacemaker wire; or an intravenous (IV) catheter in a blood vessel for a long time.
- Heart valves damaged by rheumatic fever or calcium deposits that cause age-related valve thickening. Scars in the heart from a previous case of IE also can damage heart valves.
- IV drug use, especially if needles are shared or reused, contaminated substances are injected, or the skin isn't properly cleaned before injection.
What Are the Signs and Symptoms of Endocarditis?
Infective endocarditis (IE) can cause a range of signs and symptoms that can vary from person to person. Signs and symptoms also can vary over time in the same person.
Signs and symptoms differ depending on whether you have an underlying heart problem, the type of germ causing the infection, and whether you have acute or subacute IE.
Signs and symptoms of IE may include:
- Flu-like symptoms, such as fever, chills, fatigue (tiredness), aching muscles and joints, night sweats, and headache.
- Shortness of breath or a cough that won't go away.
- A new heart murmur or a change in an existing heart murmur.
- Skin changes such as:
- Overall paleness.
- Small, painful, red or purplish bumps under the skin on the fingers or toes.
- Small, dark, painless, flat spots on the palms of the hands or the soles of the feet.
- Tiny spots under the fingernails, on the whites of the eyes, on the roof of the mouth and inside of the cheeks, or on the chest. These spots are from broken blood vessels.
- Nausea (feeling sick to your stomach), vomiting, a decrease in appetite, a sense of fullness with discomfort on the upper left side of the abdomen, or weight loss with or without a change in appetite.
- Blood in the urine.
- Swelling in the feet, legs, or abdomen.
How Is Endocarditis Diagnosed?
Your doctor will diagnose infective endocarditis (IE) based on your risk factors, your medical history and signs and symptoms, and the results from tests.
Diagnosis of the infection often is based on a number of factors, rather than a single positive test result, sign, or symptom.
Blood cultures are the most important blood tests used to diagnose IE. Blood is drawn several times over a 24-hour period. It's put in special culture bottles that allow bacteria to grow.
Doctors then identify and test the bacteria to see which antibiotics will kill them. Sometimes the blood cultures don't grow any bacteria, but the person still has IE. This is called culture-negative endocarditis, and it requires antibiotic treatment.
More standard blood tests also are used to diagnose IE. For example, a complete blood count may be used to check the number of red and white blood cells in your blood. Blood tests also may be used to check your immune system and to check for inflammation.
Echocardiography is a painless test that uses sound waves to create pictures of your heart. Two types of echocardiography are useful in diagnosing IE.
Transthoracic (tranz-thor-AS-ik) echocardiogram. For this painless test, gel is applied to the skin on your chest. A device called a transducer is moved around on the outside of your chest.
This device sends sound waves called ultrasound through your chest. As the ultrasound waves bounce off the structures of your heart, a computer converts them into pictures on a screen.
Your doctor uses the pictures to look for vegetations, areas of infected tissue (such as an abscess), and signs of heart damage.
Because the sound waves have to pass through skin, muscle, tissue, bone, and lungs, the pictures may not have enough detail. Thus, your doctor may recommend a transesophageal (tranz-ih-sof-uh-JEE-ul) echocardiogram (TEE).
Transesophageal echocardiogram. For this test, a much smaller transducer is attached to the end of a long, narrow, flexible tube. The tube is passed down your throat. Before the procedure, you're given medicine to help you relax, and your throat is sprayed with numbing medicine.
The doctor then passes the transducer down your esophagus (the passage from your mouth to your stomach). Because this passage is right behind the heart, the transducer can get clear pictures of the heart's structures.
An EKG is a simple, painless test that detects the heart's electrical activity. It shows how fast your heart is beating, whether your heart rhythm is steady or irregular, and the strength and timing of electrical signals as they pass through your heart.
An EKG typically isn't used to diagnose IE. However, it may be done to see whether IE is affecting your heart's electrical activity.
For this test, soft, sticky patches called electrodes are attached to your chest, arms, and legs. You lie still while the electrodes detect your heart's electrical signals. A machine records these signals on graph paper or shows them on a computer screen. The entire test usually takes about 10 minutes.
How Is Endocarditis Treated?
Infective endocarditis (IE) is treated with antibiotics and sometimes with heart surgery.
Antibiotics usually are given for 2 to 6 weeks through an intravenous (IV) line inserted into a vein. You're often hospitalized for at least the first week or more of treatment. This allows your doctor to make sure your infection is responding to the antibiotics.
If you're allowed to go home before the treatment is done, the antibiotics are almost always continued by vein at home. You'll need special care if you get IV antibiotic treatment at home. Before you leave the hospital, your medical team will arrange for you to receive home-based care so you can continue your treatment.
You also will need close medical followup, usually by a team of doctors. This team often includes a doctor who specializes in infectious diseases, a cardiologist (heart specialist), and a heart surgeon.
In some cases, surgery is needed to repair or replace a damaged heart valve or to help clear up the infection. IE due to an infection with fungi often requires surgery. This is because this type of IE is harder to treat than IE due to bacteria.
How Can Endocarditis Be Prevented?
If you're at risk for infective endocarditis (IE), you can take steps to prevent the infection and its complications.
- Be alert to the signs and symptoms of IE. Contact your doctor right away if you have any of these signs or symptoms, especially a persistent fever or unexplained fatigue.
- Brush and floss your teeth regularly, and have regular dental checkups. Germs from a gum infection can enter your bloodstream.
- Avoid body piercing, tattoos, or other procedures that may allow germs to enter your bloodstream.
New research shows that not everyone at risk for IE needs to take antibiotics before routine dental exams and certain other dental or medical procedures.
Let your health care providers, including your dentist, know if you're at risk for IE. They can tell you whether you need such antibiotics before exams and procedures.
- Endocarditis is an infection of the inner lining of your heart chambers and valves. The condition also is called infective endocarditis (IE).
- IE occurs if bacteria, fungi, or other germs invade your bloodstream and attach to abnormal areas of your heart. The infection can damage the heart and cause serious and sometimes fatal complications.
- IE can develop quickly or slowly depending on what type of germ is causing it and whether you have an underlying heart problem.
- IE mainly affects people who have damaged or artificial heart valves, congenital heart defects (defects that are present at birth), or implanted medical devices in the heart or blood vessels.
- IE is an uncommon condition that can affect both children and adults. It's more common in men than women.
- IE can cause a range of signs and symptoms that can vary from person to person. Signs and symptoms also can vary over time. Common symptoms are fever and other flu-like symptoms.
- Your doctor will diagnose IE based on your risk factors, your medical history and signs and symptoms, and the results from tests. Diagnosis of the infection often is based on a number of factors, rather than a single positive test result, sign, or symptom.
- IE is treated with antibiotics and sometimes with heart surgery. Antibiotics usually are given for 2 to 6 weeks through an intravenous (IV) line inserted into a vein. You're often hospitalized for at least the first week or more of treatment. In some cases, surgery is needed to repair or replace a damaged heart valve or to help clear up the infection.
- If you're at risk for IE, you can take steps to prevent the infection and its complications. Be alert to the signs and symptoms of IE. Contact your doctor right away if you have any of these signs and symptoms. Brush and floss your teeth regularly, and have regular dental checkups. Avoid body piercing, tattoos, or other procedures that may allow germs to enter your bloodstream.
- Let your health care providers, including your dentist, know if you're at risk for IE. They can tell you whether you need antibiotics before routine dental exams and certain other dental or medical procedures that can let germs into your bloodstream.
Deforestation is the destruction or clearing of forested lands, usually for the purposes of expanding agricultural land or for timber harvesting. When the process is conducted by clearcutting (removal of most or all of the canopy tree growth, leaving few or no live or dead trees standing) or when mass forest burning occurs, significant losses of habitat and biodiversity may result, including the erosion of biological community structure and the extinction of species. Deforestation is proceeding at a rapid pace in many areas of the world, especially in the tropical and boreal forest regions of the earth, with the net loss of forests during the 1990s estimated in the range of nine to sixteen million hectares per annum. Large scale deforestation may have adverse impacts on biosequestration of atmospheric carbon dioxide, exacerbating greenhouse gas buildup through the release of carbon stored in tree biomass and reduced CO2 fixation rates due to the loss of trees. Deforested regions are often subject to accelerated rates of soil erosion, increased surface runoff and sedimentation of streams and rivers, and reduced infiltration and ground water recharge, with adverse impacts on the quality of surface water and ground water resources.
Root causes of deforestation include a broad range of economic and social factors, such as (a) poorly formulated property rights systems, (b) widespread poverty and overpopulation, which place pressure on marginally productive lands for subsistence, (c) expansion of agriculture to feed a dramatically increasing human population, (d) a short term view of forest management economics at the expense of long term forest productivity and (e) lax forest management. The impacts of deforestation can include the displacement of indigenous peoples from their historic living areas, or the loss of traditional livelihoods and food production and procurement systems. While there is currently a strong correlation between widespread deforestation and countries with a low per capita income, deforestation from commercial timber harvesting is still a problem in many industrialized countries as well.
Types of deforestation
Chief methods of deforestation are: (a) land clearing to prepare for livestock grazing or expansion of crop planting, (b) commercial logging and timber harvests, (c) slash-and-burn forest cutting for subsistence farming, and (d) natural events such as volcanic eruption, stand windthrow from hurricanes, catastrophic forest fires, or changes in local climate and rainfall regimes. It is important to note that the natural factors which may cause deforestation represent only a small fraction of observed deforestation worldwide during historical time.
Causes of deforestation
See Main Article: Causes of deforestation
The predominant driver of deforestation worldwide is the clearing of trees to expand agriculture, according to the United Nations Framework Convention on Climate Change. Subsistence agriculture in poor countries is responsible for 48% of deforestation, commercial agriculture for 32%, and commercial logging for only 14%; charcoal and other fuel wood removals comprise less than 6% of deforestation, but those uses can generally be assigned to subsistence practices.
The degradation of forest ecosystems has also been traced to economic incentives that make forest conversion appear more profitable than forest conservation. Many important forest functions lack readily visible markets, and hence, are without economic value that is apparent to the forest owners or the communities that depend on forests for their well-being. Considerable deforestation arises from a lack of security of property rights and from the absence of effective enforcement of conservation policies, both factors particularly prominent in developing countries; in some cases, terrorism and governmental corruption are concomitant factors in forest losses.
Small scale deforestation was practiced by some societies tens of thousands of years before the present, with some of the earliest evidence of deforestation appearing in the Mesolithic period. These initial clearings were likely devised to convert closed forests into more open ecosystems favourable to game animals. With the advent of agriculture in the mid-Holocene, greater areas were deforested, and fire was increasingly used to clear land for crops. In Europe, by 7000 BC, Mesolithic hunter-gatherers employed fire to create openings for red deer and wild boar. Pollen core records from Great Britain show shade-tolerant species such as oak and ash being replaced by hazels, brambles, grasses and nettles. Removal of the forests led to decreased transpiration, resulting in increased formation of raised peat bogs. A widespread decrease in elm pollen across Europe between 8400-8300 BC and 7200-7000 BC, starting in southern Europe and gradually moving north to Great Britain, likely represents land clearing by fire at the onset of Neolithic agriculture.
The Neolithic period ushered in extensive deforestation for agriculture. Stone axes were being made from about 3000 BC not only from flint, but from a wide array of hard rocks from across Africa, Britain, Scandinavia and North America. They include the noted Langdale axe industry in the English Lake District, quarries at Penmaenmawr in North Wales and numerous other locations. Rough-outs were made locally near the quarries, with some polished locally to yield a fine finish. This step not only increased the mechanical strength of an axe, but also facilitated its penetration of timber. Flint continued to be utilised from sources such as Grimes Graves as well as numerous mines across Europe. Evidence of deforestation has been found in Minoan Crete; for example, the environs of the Palace of Knossos were severely deforested in the Bronze Age.
Ancient and medieval times
In ancient Greece, regional analyses of historic erosion and alluviation demonstrate that a major phase of erosion followed deforestation, lagging the introduction of farming in the various regions of Greece by about 500-1,000 years and ranging from the later Neolithic to the Early Bronze Age. The thousand years following the middle of the first millennium BC saw substantial instances of soil erosion in numerous locales. Historic siltation affected ports along certain coasts of Europe (e.g. Bruges), along the coasts of the Black Sea, and along the southern coasts of Asia Minor (e.g. Tulcea, Clarus, and the ports of Ephesus, Priene and Miletus, where harbours were reduced in use or abandoned because of the silt deposited by the Danube and Meander Rivers), as well as in coastal Syria, during the last centuries BC.
By the end of the Middle Ages in Europe, there were severe shortages of food, fuel and building materials, since most of the primordial forests had been cleared. The transition to a coal burning economy and the cultivation of potatoes and maize allowed the already large European population to survive. Easter Island suffered an ecological disaster, aggravated by agriculture and deforestation. The disappearance of the island's palm trees slightly predates, and appears correlated with, a significant decline of its civilization starting at least as early as the 1600s AD; the societal collapse of that period can be linked to deforestation and over-exploitation of other resources.
Post industrial era
Since the mid nineteenth century, worldwide deforestation has sharply accelerated, driven by the expanding human population and industrialisation. Approximately one half of the Earth's mature tropical forests (between 7.5 million and 8.0 million sq. km of the original 15 million to 16 million sq. km that until 1947 covered the Earth) have now been cleared. Some scientists have asserted that unless significant forest protection measures are adopted, 90 percent of the planet's forests will have been removed by the year 2030, and hundreds of thousands of plant and animal species will have been rendered extinct.
The adverse environmental impacts associated with large scale deforestation can include significant changes in ecological, hydrological, and climatic processes at local and regional levels. The ecological consequences include habitat loss and habitat fragmentation and adverse changes in local species richness and biodiversity. In some cases, increased local species diversity associated with the destruction or fragmentation of old-growth forests may actually erode biological diversity at regional scales, through the replacement of rare species with restricted distributions (e.g., spotted owls, spectacled bears, colobus monkeys) by common species that are habitat generalists, human commensals, or invasive species. Hydrological impacts stem from the loss of infiltration capacity associated with canopy interception and leaf litter absorption, with a resulting acceleration of surface runoff at the expense of groundwater recharge; these impacts aggravate problems of water pollution and sedimentation, and may alter the balance and volumes of the ground water and surface water flow regimes available to sustain riparian ecosystems. Soil loss may occur as the result of active surface erosion and through the loss of organic matter accumulation. Climate impacts relate to the carbon sink reductions engendered by deforestation, whose long term effects have contributed to the buildup of atmospheric carbon dioxide.
- Pekka E. Kauppi, Jesse H. Ausubel, Jingyun Fang, Alexander S. Mather, Roger A. Sedjo and Paul E. Waggoner. 2006. Returning forests analyzed with the forest identity. Proceedings of the National Academy of Sciences of the United States of America
- United Nations FCCC. 2007. Investment and financial flows to address climate change. 81 pages
- David W. Pearce. 2001. The Economic Value of Forest Ecosystems. Ecosystem Health, University College London, UK, Vol. 7, no. 4, pages 284–296
- Oxford Journal of Archaeology. Clearances and Clearings: Deforestation in Mesolithic/Neolithic Britain
- C. Michael Hogan. 2007. Knossos fieldnotes, The Modern Antiquarian
- Tjeerd H. van Andel, Eberhard Zangger and Anne Demitrack. 1990. Land Use and Soil Erosion in Prehistoric and Historical Greece. Journal of Field Archaeology, 17.4, pages 379-396
- Norman F. Cantor. 1993. The Civilization of the Middle Ages: The Life and Death of a Civilization. page 564
- E. O. Wilson. 2002. The Future of Life, Vintage ISBN 0-679-76811-4
- Ron Nielsen. 2006. The Little Green Handbook: Seven Trends Shaping the Future of Our Planet, Picador, New York ISBN 978-0312425814
Unlike the States, which have broad police power to pass laws promoting the general welfare of their people, the actions of the United States government are limited by the Constitution, which vests its three branches with specific authority while reserving all other powers "to the States respectively, or to the people." Congress' lawmaking powers are laid out in Article I, Section 8, and all Acts of Congress must fall under one of these provisions. These include the power to levy taxes, raise and support an army, declare war, establish post offices, and grant patents and copyrights. But the vast majority of federal laws are enacted under the aegis of the commerce clause which grants Congress the authority "to regulate commerce with foreign nations, among the several states, and with the Indian tribes."
In the early days of the Republic, the United States government was relatively modest with regards to the number and scope of the laws it passed. Jurists and statesmen debated over what activities fit under the definition of “interstate commerce.” In Gibbons v. Ogden (1824), the Marshall Supreme Court ruled that commerce included not only the trade of commodities but also navigation of waterways and all other forms of intercourse. On the other hand, Alexander Hamilton, in the Federalist Papers, argued that such productive activities as agriculture and mining did not fall within the label of “commerce” but were rather something that occurred prior to commerce.
The issue came to a head in the 1930s; in 1933, President Roosevelt signed into law the National Industrial Recovery Act-- a sweeping system of economic and societal reforms meant to alleviate the hardships of the Great Depression. In 1935, however, the Supreme Court struck down many of the new law's provisions, applying a narrow construction of the commerce clause. Among other things, the Court ruled that the United States government could not regulate labor (e.g. wages, workplace conditions) because this was not commerce but something which occurred alongside commerce (in 1918 the Court had struck down a law banning child labor).
Roosevelt, who enjoyed wide political support in the legislature and among voters, was none too pleased with these court decisions, and he proposed a law in 1937 which would increase the number of justices on the Supreme Court (the Constitution is silent with regard to the Court's composition), thus allowing the president to "pack" the court with judges who would support a more agreeable interpretation of the commerce clause. Critics argued that this was a cynical political move that would destroy any pretense of an independent judiciary; some called it dictatorial.
At any rate, the Judicial Procedures Reform Bill of 1937 failed, but by the end of Roosevelt’s presidency the Court had adopted the more expansive interpretation of the commerce clause. This was due in large part to the fact that, during his four terms in office, President Roosevelt had the opportunity to replace several justices who retired or died; moreover one justice underwent a “philosophical conversion” in 1937 and came out in favor of a broader definition of commerce-- a move referred to by judicial historians as “the switch in time that saved nine.”
After that it started looking as though there was effectively no limit to the types of federal laws that could be enacted: until the 1990s, students were taught that the commerce clause had become a rubber stamp and that Congress could pass any law it wanted so long as its text paid some lip service to “interstate commerce” or “the United States postal service.”
But then, in the 1995 case United States v Lopez, the Rehnquist Court struck down the Gun-Free School Zones Act, which made possession of firearms within the vicinity of a school a federal crime, finding that the connection between guns in school zones and interstate commerce was too tenuous. Likewise, in a 2000 decision (United States v Morrison), the Court ruled that Congress had overstepped its bounds in passing the Violence Against Women Act. The government’s arguments that domestic violence affected commerce because it kept women away from the workplace and marketplace and that violence against women increased healthcare costs for employers and for the States were not enough to convince the Court.
The parties challenging the Affordable Care Act are being represented by Paul Clement, a former solicitor general under George W. Bush. Clement argued seven cases before the Supreme Court this term, and New York Magazine described him as "a sort of anti–solicitor general-- the go-to lawyer for some of the Republican Party's most significant, and polarizing, legal causes." Yet, the article also points out that he is admired and respected by liberals and conservatives alike and that he is known for being able to cut through the politics so as to tackle the legal question at the heart of a case. We can see this in the Obamacare case, where his arguments are well-reasoned and focus on the limits to Congressional authority.
Clement argues that the individual mandate is unprecedented in that it forces citizens to purchase something and enter into contracts. Thus Congress is not merely regulating the market, but rather it is requiring people to enter the market (the health insurance market). During oral arguments, Justice Scalia suggested that if Congress could constrain citizens to buy health insurance policies-- for their own good and to contain the costs of healthcare for everyone-- then what’s to stop them requiring everyone to buy a gym membership (something else that would be good for their health and arguably reduce the money spent treating obesity-related diseases).
It’s interesting to note that everyone agrees that if Congress had enacted a “single payer system”-- where citizens paid an extra tax to the federal government and, in return, the government provided them with health insurance coverage-- that this would pass constitutional muster. Such a scheme would be analogous to Social Security, where all workers are required to pay into the system and in return they receive a government pension when they reach retirement age. This falls under Congress’ power to levy taxes. Of course during the healthcare debate the idea of a single payer system was quickly rejected for political reasons. Many dismissed it as too radical, and critics pointed to the shortcomings of the National Health Service in countries like the UK. Furthermore, it’s unclear what position would be left for private health insurance companies under such a system, and these are of course large, profitable corporations with a powerful lobby.
The Supreme Court also concedes that Congress would have the power to require the uninsured to purchase a policy at the time they avail themselves of the healthcare market (e.g. when they are “on the operating table”). Obviously a system where people could wait until they need medical treatment to purchase insurance would be unsustainable.
It is the job of the solicitor general to represent the United States government before the Supreme Court, and thus to defend the constitutionality of federal laws. In response to the challengers’ argument that the individual mandate forces people to enter the market, supporters assert that the relevant market here is not the health insurance market but the healthcare industry as a whole-- and everyone will need to avail themselves of this market sooner or later.
The healthcare industry is unique, and uninsured individuals have an enormous, deleterious impact on the system. Hospitals and healthcare providers take it for granted that some of the people who require emergency treatment will prove to be uninsured and thus almost assuredly unable to pay for the services provided; this is factored into their financial calculations, and it raises the cost for everyone: individuals are charged higher prices for medical treatment, employers spend more to provide workers with health insurance, and the state’s healthcare expenses continue to grow.
Alternatively, the Act’s defenders argue that the individual mandate is allowable under Congress’ power to levy taxes. The penalty charged to uninsured individuals is comparable to a tax (when the law was pushed through Congress everyone was careful to clarify that the penalty was not a new tax-- for political reasons--, but this isn’t necessarily dispositive). Indeed, one could say that the assertion that the mandate forces citizens to purchase health insurance is incorrect: they have the choice of either maintaining an insurance policy or paying a penalty.
During oral arguments, supporters claimed that the only thing distinguishing Obamacare from Social Security was the involvement of private companies. If we accept that the government could act as insurer itself and require everyone to pay a tax, why can’t it provide insurance through private intermediaries who receive through premiums the money that would otherwise be paid to the government?
Judicial analysts were taken by surprise when the Supreme Court showed that it was seriously considering invalidating the law. Indeed, several conservative judges in the lower courts had confidently ruled that Obamacare presented no constitutional problems.
Judging by the questions asked during oral arguments-- not always a good indication of the opinion justices will arrive at in the Court's final decision--, Justices Scalia and Alito seem most antagonistic to the individual mandate. Justice Thomas never participates in oral arguments, but it is safe to assume that he will vote to strike down the law (Thomas actually wrote a concurring opinion in Lopez stating that the Court should return to a Hamiltonian interpretation of the commerce clause which would preclude federal laws regulating mining or agriculture). On the left, Justices Sotomayor, Kagan, Ginsburg and Breyer all appear ready to uphold Obamacare; I would say that, during the hearings, Sotomayor was the most vocal. Observers have stated that Justice Kennedy (the notorious swing voter who leans slightly to the right) and-- to a much lesser extent-- Chief Justice Roberts might be on the fence, although both appear more likely than not to declare the mandate unconstitutional.
It seems to me that there are three types of questions justices raise during oral arguments. First, there are the antagonistic questions from justices who clearly oppose the position being put forward-- either pointing out holes in the party’s argument or else just arguing with counsel (on occasion the justices even argue among themselves). A favorable assessment of the motivation behind these questions would be that the justice is trying to persuade his colleagues; a more cynical person might think they just serve to put the attorney on the spot or to satisfy the justice’s own vanity. Next, there are the softball questions thrown out by sympathetic justices who often clarify the attorney’s line of reasoning or even put forward their own arguments supporting the party’s position. And finally, the third type is comprised of those questions which represent legitimate requests for additional explanations-- perhaps asking the attorney how he believes the legal interpretation he is advancing would apply to a given hypothetical situation. This third type can appear to be the least common; it can also be hard to distinguish the sincere questions from the antagonistic.
If the Supreme Court ultimately strikes down the individual mandate (and this appears to be a definite possibility), it would then need to decide whether the remainder of the law can stand without this key provision. It is hard to question the fact that the healthcare system envisioned under the new law could not function the way lawmakers intended if participation is not obligatory. As the justices put it, all the healthy 23-year-olds can choose to opt out of purchasing health insurance: not only do insurers need regular payments from healthy individuals to offset the cost of treatment for the infirm (this is how insurance works), but the small percentage of these 23-year-olds who end up seriously injured in an accident or diagnosed with some serious disease will rack up medical bills they cannot pay, thus burdening healthcare providers-- raising the cost of treatment for everyone and the cost of the Affordable Care program for taxpayers.
When constructing a Z-matrix, you should follow these steps:
- Draw the molecule.
- Assign one atom to be #1.
- From the first atom, assign all other atoms a sequential number. When assigning atoms, you must be careful to assign them in an order that is easy to use. This will become clearer as you experiment with different molecules.
- Starting with atom #1, list the atoms you numbered, in order, down your paper, one right under the other.
- Place the atom designated as #1 at the origin of your coordinate system. The first atom does not have any defining measurements since it is at the origin.
- To identify the second atom, you must only define its bond length to the first atom. Use the reference charts given.
- For the third atom, you must define a bond length to atom #1 and a bond angle between atom #3 and atoms #1 and #2. (Bond angles are the angles formed by three connected atoms, measured at the central atom.)
- Remember that you can only use previously defined atoms when defining your current atom. This means that you cannot reference a later atom (such as atom #7) when defining an earlier one.
- To identify atom #4 and all other atoms, you must include a bond length, a bond angle and a dihedral angle. (Dihedral angles are the angles between an atom and the plane created by three other atoms.) This is done by using neighboring atoms to the atom you are describing. Again, the reference charts are helpful in locating bond lengths and angles. A worked example is given below.
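As a concrete illustration of these steps (this example is not part of the original reference charts, and the numbers are only rough, textbook-style values), a Z-matrix for hydrogen peroxide (H-O-O-H) might look like the following:

```
O1
O2   1   1.5
H3   1   1.0   2   110.0
H4   2   1.0   1   110.0   3   120.0
```

Reading the last line: atom #4 (a hydrogen) is bonded to atom #2 with a bond length of about 1.0 Å, makes a bond angle of about 110 degrees with atoms #2 and #1, and makes a dihedral angle of about 120 degrees with respect to the plane formed by atoms #2, #1 and #3. Experimentally measured values for hydrogen peroxide differ somewhat; the point here is only to show the format.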
Bond Length Hints:
- The bond length of each kind of bond varies very little from one particular compound to another.
- Single bonds of first-row elements (C, N, O, F) to hydrogen are all about 1.0 Å.
- Single bonds between first-row atoms are all about 1.5 Å.
- Double and triple bonds are shorter: 1.2 to 1.3 Å in first-row compounds.
- Second-row, and higher, atoms (S, P, Cl, etc.) form correspondingly longer bonds.

Normal Bond Lengths (in angstroms):
C≡C (triple bond) 1.20
Bond Angle Hints:
- Angles with all single bonds: 110 degrees
- Angles with a double bond: 120 degrees
- Angles with a triple bond: 180 degrees

- Angles with all single bonds: 120 degrees
- Angles with a double bond: 180 degrees
- Remember that you can ONLY use previously defined atoms to identify the
atom you are working on.
- Angles can be positive or negative to represent directions. If one atom is going into the screen and another is coming out of the screen, one angle should be defined as negative and the other as positive. It does not matter which you choose to be positive or negative.
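For readers curious about what the Z-matrix to Cartesian conversion involves, the short Python sketch below places the first three atoms of a Z-matrix in Cartesian space. This is only an illustration of the geometry, not the code behind the site's converter, and atoms #4 and beyond would additionally require the dihedral angle.

import math

def place_first_three(r12, r13, angle_312_deg):
    # Atom #1 sits at the origin; atom #2 lies on the +x axis at the bond
    # length r12; atom #3 is bonded to atom #1 at distance r13 and makes
    # the 3-1-2 angle (in degrees) with the 1->2 bond, in the xy plane.
    atom1 = (0.0, 0.0, 0.0)
    atom2 = (r12, 0.0, 0.0)
    theta = math.radians(angle_312_deg)
    atom3 = (r13 * math.cos(theta), r13 * math.sin(theta), 0.0)
    return atom1, atom2, atom3

# Example: water with O-H = 0.96 angstroms and H-O-H = 104.5 degrees.
oxygen, hydrogen_a, hydrogen_b = place_first_three(0.96, 0.96, 104.5)
print(oxygen, hydrogen_a, hydrogen_b)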
Z-Matrix Converter http://www.shodor.org/chemviz/zmatrices/babel/html. | http://www.shodor.org/chemviz/zmatrices/students/reference.html | 13 |
33 | In the tempestuous years of the American Revolution there were two giant steps that had to be taken by the Colonies through their representatives at the Continental Congress. The first step was to throw off the government of Britain which was accomplished through the signing of the Declaration of Independence, and the fighting of the revolutionary war. The second step was to fill the vacuum of government which the first step created.
The first attempt to establish a national government was through the adoption of the articles of confederation which were inadequate and a second attempt was made with the ratification of the Constitution for the United States of America. Contained in the Preamble of this Constitution is the clear concise statement of the purpose for the creation of this national government. "We, the people of the United States, in order to form a more perfect union, establish justice, insure domestic tranquility, provide for the common defense, promote the general welfare, and secure the blessings of liberty to ourselves and our posterity, do ordain and establish this Constitution for the United States of America." However, before many of the states would ratify the Constitution, they demanded that a declaration of the rights of the people be set forth in that same Constitution. This declaration of rights was attached as an appendix to the Constitution and became known as the Bill of Rights. These were the Ten Commandments that government could not violate, if it did, then the Constitution would be abrogated and the union automatically dissolved.
During this time there were two views of government expressed and followed by the early administrations. The one was the Federalist view whose advocate was Alexander Hamilton, the other was the Republican view whose leading spokesman was Thomas Jefferson. The principles of these two forces were in direct opposition to each other. The Federalist's position was the old monarchal principle of a strong central government, while the Republican's position was the new principle of the sovereignty of the colonies – THE STATES.
These Federalists made great advances under Washington's and Adam's administrations with the appointment of Alexander Hamilton (a Jew) as the Secretary of the Treasury. Jefferson's grounds of opposition to the Secretary of the Treasury were that Hamilton despised the republican form of government, was a monarchist in theory, and sought by administrative measures to subvert the constitution, and to gradually convert it into something like that of Great Britain under George the Third. Being Secretary of the Treasury Hamilton was able to contrive financial measures aimed at accumulating new powers in the hands of the Central Government. A part of this plan involved bribery of the legislature, that the votes by which it was in great part carried were recorded by a corrupt squadron of Representatives and Senators interested in public debt and bank scrip. In the ANAS Jefferson described Hamilton as "so bewitched and perverted by the British example as to be under thorough conviction that corruption was essential to the government of a nation."
Madison had been associated with Hamilton when the United States Constitution was drawn up, but soon after that came a separation between the two. Madison in his old age said to his friend, Nicholas P. Trist: "I deserted Colonel Hamilton, or rather Colonel Hamilton deserted me - in a word, the divergence between us took place from his wishing to administration, or rather to administer the Government, into what he thought it ought to be; while, on my part, I endeavored to make it conform to the Constitution as understood by the Convention that produced and recommended it, and particularly by the State conventions that adopted it." Since strong central government was the theme of the Federalists, it was necessary that the government always be expanding and growing. This would require ever increasing funds, and the Federalists knew that the populace would not bear the heavy burden of taxation unless they were made to think that it would only continue for a short time. The people would only agree to this temporary tax burden in order (1) to payoff the national debt, or (2) to support the national defenses in a time of war. Therefore, either the Federalists were increasing the national debt through deficit spending, or they were making preparation to involve America in a new foreign war.
The beginning of the 1800's saw the defeat of the Federalists in the legislative and executive branches of government with the election of Thomas Jefferson as President. The Federalists had entrenched themselves in the judicial branch of government for life and thereby dominated the Federal Courts. In spite of this the Republicans began to reorganize the national government. In Jefferson's inauguration speech on March 4, 1801 the theme of republicanism was simply and yet profoundly expressed. “With all these blessings, what more is necessary to make us a happy and prosperous people? Still one thing more, fellow-citizens, a wise and frugal government, which shall not take from the mouth of labour the bread it has earned." In December 1801 Jefferson sent a message to Congress which again clearly sets forth the aims of republicanism. “Considering the general tendency to multiply offices and dependencies and to increase the ultimate term of burden which the citizens can bear, it behooves us to avail ourselves of every occasion which presents itself for taking off the surcharge (internal taxation), that it never may be seen here that after leaving to labour the smallest portion of its earnings on which it can subsist, government shall itself consume the residue of what it was instituted to guard."
"A wise and frugal government" was Jefferson's main goal and for the first time the job of balancing the budget and paying off the national debt was undertaken, (go to top)
and Gallatine was appointed Secretary of Treasury with this awesome responsibility. Gallatine was so skillful in the accomplishment of this task that even though in 1803 customs produced $2 million less than in 1802, yet $15 million was provided for the Louisiana purchase, and all the needs of government were taken care of with an added $7 hundred thousand expenditure, the national debt being paid off, and all this was done without imposing a new tax. Republicanism was simplicity and economy in government.
Not only did Jeffersonian Republicanism aim at paying off the national debt, but also its goal was to keep America out of foreign wars. Neutrality was the watchword. President Jefferson wrote foreign ambassador William Short that America must keep out of European politics: "to be entangled with them would be a much greater evil than a temporary acquiescence in the false principles which have prevailed. Peace is our most important interest and a recovery from debt." Jefferson knew through experience that war meant (1) conscription of life, (2) conscription of wealth, (3) public bankruptcy, and (4) confiscation of private property by taxation or debasement of money. War was contrary to all moral and economic interest of civilization, and therefore contrary to the principles of republicanism.
Debasement of money was a problem that had plagued the colonies for the last 30 years. The people had experienced a series of depressions with inflation of prices brought on by the increased issuance of paper script. It was Thomas Jefferson who brought forth a sound money plan which was a complete break with the English system. Jefferson concluded that in order to avoid depressions caused by debasement of money, American money had to be hard money not worthless paper. This is expressed in the United States Constitution in Article 1 Section 8 which says, "The Congress shall have power to coin money, and to provide for the punishment of counterfeiting the current coin of the United States." Article 1 Section 10 even limits the power of the states in the matter of money by saying, "No state shall coin money; emit bills of credit; make anything but gold and silver coin a tender in payment of debts." Only by loose Federalist perversion of the Constitution could its clear wording be twisted to imply that the privately owned Federal Reserve Bank could print paper scrip and call it money. Its notes are not lawful money nor are they redeemable for lawful money. The Federalists tried to set up a National Bank (like the Federal Reserve) in 1790. Hamilton drew up the plan for the National Bank which Jefferson immediately opposed. Mr. Jefferson charged that "a system had been devised at the Treasury, and a series of laws passed, under which the states were being deluged with paper money instead of gold and silver. The poison had been injected into the veins of government; and the constitution was being changed into a very different thing from what the people thought they had submitted to. There had now been brought forward a proposition far beyond any ever yet advanced, on the decision of which would depend, whether we live under a limited or an unlimited government."
Not only were these republican principles firmly established during Jefferson's 8 years as president, but also during the 16 years of the following administrations of Madison and Monroe. Federalism was supposed to have died during this period, but instead it only with drew into its stronghold of the judiciary. Eternal vigilance for the republic was soon replaced by perpetual indolence and indifference and the American Republic died. This event transpired so slowly that it came without public observation. Through the process of time and the unceasing efforts of the judiciary, the cancerous Federalism consumed the host of healthy Republicanism. The American public was not aware of what was happening to its government. The Federal judges determined the success or failure of the lawyers who practiced in their courts, and only those lawyers who were patrons of federalism were allowed to wax rich. These wealthy Federalist lacky-lawyers were then able to gain key positions in government through the power of their money. Today the judiciary controls both the executive and legislative branches of government. In order to verify this fact find out how many positions in these branches of government are currently held by lawyers.
The Federalist judiciary through its Courts-Judges, and Servants of the Courts-Lawyers, has established a Federalist government in America. Compare the principles of Federalism with those of Republicanism and you will know beyond doubt that even though there is a republican party in name only, the Republic is dead.
Even the most naive can readily see that today’s government is not based upon republicanism, but totally on Federalism. However, this is not the time to mourn the death of the Republic, but rather to labor in the anticipation of its glorious resurrection. When the cup of Federalistic abuses has overflowed and can be contained no more, then the national public will arise from its perpetual sleep of death, and with the resurrection of eternal vigilance the Republic will rise from the dust of decay to a newness of life as YAHWAH God's Kingdom, "NOVUS ORDO SECLORUM", the NEW ORDER OF THE AGES. | http://www.truthfromgod.com/articles/death_of_republic.html | 13 |
22 | Will climate change help or hinder our efforts to maintain an adequate food supply for the increasing world population of the next century? Which regions are likely to benefit and which are likely to suffer food shortages and socioeconomic crises? Could the beneficial effects of increasing atmospheric carbon dioxide (CO2) on plants (the so-called "CO2 fertilization effect") counteract some of the negative effects of climate change? What types of adaptations and policies will be necessary to take advantage of the opportunities and minimize the negative impacts of climate change on agriculture? What will the cost of these adaptations and policies be?
To address these questions scientists from various disciplines have linked together climate, crop growth, and economic-food trade computer models. These multi-layered models are extremely complex and contain numerous assumptions about the physical, biological, and socioeconomic systems they attempt to simulate. Nevertheless, they represent the most comprehensive analyses we have at present. They can be useful to policymakers, particularly if there is an educated appreciation for the level of uncertainty inherent in their projections. Before presenting model outcomes, we will first review some fundamental aspects of what we know and don't know about how crop plants respond to temperature and increases in atmospheric CO2.
Temperature Effects on Plants
Most plant processes related to growth and yield are highly temperature dependent. We can identify an optimum temperature range for maximum yield for any one crop. Crop species are often classified as warm- or cool-season types. The optimum growth temperature frequently corresponds to the optimum temperature for photosynthesis, the process by which plants absorb CO2 from the atmosphere and convert it to sugars used for energy and growth. Temperature also affects the rate of plant development. Higher temperatures speed annual crops through their developmental phases. This shortens the life cycle of determinate species like grain crops, which only set seed once and then stop producing. Figure 1 illustrates the temperature effects on photosynthesis and crop growth duration. It shows that for a variety currently being grown in a climate near its optimum, a temperature increase of several degrees could reduce photosynthesis and shorten the growing period. Both of these effects will tend to reduce yields.
The particular crop varieties currently being grown in major production areas are usually those best-adapted to the current climate. A significant increase in growing season temperatures will require shifts to new varieties that are more heat tolerant, do not mature too quickly, and have a higher temperature optimum for photosynthesis. Developing such varieties should be possible for many crop species, but there are limits to what can be accomplished through plant breeding and modern genetic engineering approaches. In many cases traditional crops will have to be abandoned for new crops better suited to the new environment. For farmers in very cool regions, where the current climate limits their crop options, global warming will be mostly a benefit, giving them the opportunity to grow a wider range of crops and long-growing-season, high-yielding varieties.
Some plant species require a cold period before they will produce flowers and a harvestable product. The process, called vernalization, tends to have very narrow temperature and duration boundaries. Vernalization of winter wheat, for example, requires temperature to be between 0 and 11°C (32 and 52°F), with an optimum near 3°C (37°F) for a period of 6 to 8 weeks. Production of the seed of many biennial vegetable crops has similar requirements. Even a minor climate shift of 1-2°C could have a substantial impact on the geographic range of these crops.
Temperature Effects on Livestock
Climate change will affect livestock production indirectly by its impact on the availability and price of animal feed. Farm animals are also directly affected by temperature. Figure 2 illustrates that animal species differ in their temperature optimum range. Young animals have a very narrow and specific temperature optimum. A rise in temperatures in regions currently near the threshold of the optimum range could be detrimental to production. Construction and maintenance of controlled environment facilities to house farm animals is costly and will not be a viable option for many.
Carbon Dioxide (CO2) Effects on Plants
The debate over whether CO2 and other greenhouse gases are warming the planet continues, but few question the fact that atmospheric CO2 is increasing exponentially and will likely double (to 700 parts per million (ppm)) within the next century. This has a potential beneficial effect on the Earth's plant life because plants take up CO2 via photosynthesis and use it to produce sugars and grow. If this "CO2 fertilization effect" is large, it could significantly increase the capacity of plant ecosystems to absorb and temporarily store excess carbon. It could also lead to significant increases in crop productivity.
CO2 and photosynthesis
The biochemistry of photosynthesis differs among plant species, and this greatly affects their relative response to CO2. Most economically important crop and weed species can be classified as either a C3 or C4 type, the names referring to whether the early products of photosynthesis are compounds with three or four carbon atoms. It has been well known for many years that the C3 photosynthetic pathway is less efficient than the C4 pathway. Because of this, C3 plants benefit much more from increases in CO2 than C4 plants (Fig. 3). Over 90 percent of the world's plant species are the C3 type, including wheat, rice, potato, bean, most vegetable and fruit crops, and many weed species. However, the C4 group includes the important food crops, maize, millet, sugarcane, and sorghum, as well as many pasture grasses and weed species. These C4 crops will benefit little from a CO2 doubling.
The CO2 response curves shown in Figure 3 are typical of results from experiments where plants are grown under optimal conditions and current ambient CO2 concentration (350 ppm). The magnitude of the CO2 response often changes when plants are acclimated to a high CO2 environment (e.g., 700 ppm). Usually the beneficial effects decline with long-term exposure to high CO2, but in some instances they increase. Identifying the mechanisms of both upward and downward photosynthetic acclimation to CO2 is an important area of current research. The greatest benefits from CO2 tend to occur when plants are able to expand their "sink capacity" for the products of photosynthesis by, for example, producing more flowers and fruit when grown at high CO2. When genetic or environmental factors limit growth and sink capacity, sugars build up in the leaves, a negative feedback on photosynthesis occurs and the benefits from elevated CO2 become minimal. This explains why maximum CO2 benefits usually require an optimum environment and increased inputs of water and fertilizers. A research priority in the future will be breeding for genotypes that take full advantage of increases in CO2.
CO2 and crop water use
Another important direct effect of high CO2 on plants (both C3 and C4 species) is a partial closure of the small pores, or stomates, of the leaves. This restricts the escape of water vapor from the leaves (transpiration) more than it restricts photosynthesis. Some have suggested that this will moderate the increase in crop water requirements anticipated to occur in a global warming scenario. However, significant water savings have seldom been observed in experiments designed to test the hypothesis. Although plants grown at high CO2 transpire less water per unit leaf area, their leaves are frequently larger and there are more of them so that whole plant water use is similar to or greater than plants grown at normal CO2 concentrations.
The stomatal response to CO2 does appear to have some beneficial impact under water-limited conditions. Several studies have found that the relative beneficial effect of a CO2 doubling on growth is greater under mild water stress conditions than when water supply is optimum. The absolute benefit from CO2 is nevertheless maximum when water is not limiting growth.
CO2 and crop yield
Most of our information regarding the yield response to CO2 is based on controlled environment experiments, where plants were well supplied with water and nutrients, temperatures were near optimum, and pressure from weeds, disease and insect pests was nonexistent. Under such optimum conditions a doubling of CO2 (e.g., from 350 to 700 ppm) typically increases the yield of C3 crops by 20 - 35%. While this describes the average, there are reports in the literature of lower yield responses in some slow-growing winter vegetables such as cabbage, and reports of higher yield responses in some fast-growing indeterminate species such as cotton and citrus. Maize and other C4 crops typically have yield increases of less than 10% with a CO2 doubling, as might be expected from their photosynthetic response (Fig. 3).
Several recent reviews have emphasized that when plants grow in a field situation, the optimum conditions required for full realization of the benefits from CO2 enrichment are seldom, if ever, maintained. This is particularly true for natural ecosystems and for agricultural systems in developing nations where irrigation, fertilizer, herbicides, and pesticides are not available or prohibitively expensive. Even in developed countries, the increase in inputs sometimes necessary for maximum CO2 benefit may not be cost-effective or may be limited by concerns regarding resource conservation or environmental quality.
The specific temperature range for realization of a positive CO2 effect varies, but for most crops the beneficial effects become minimal at temperatures below about 15°C (59°F). This has important implications for temperate regions of the world where, despite global warming, average temperatures during early and late portions of the growing season will be too cool to expect much benefit from a CO2 doubling. Where excessively high temperatures (e.g. above 38°C or 100°F) occur, flowering and pollination of many crop species will be impaired. Yields will be very low regardless of atmospheric CO2 levels.
CO2 and weed, disease, and insect pests
An increase in atmospheric CO2 is just as likely to increase the growth rate of weed species as cash crops. To date, most of our information regarding crop response to CO2 is based on experiments in which competition from weeds was not a factor. Important C4 crops, such as maize and sugarcane, may experience yield reductions because of increased competition from C3 weeds. However, broad generalizations regarding CO2 enrichment effects on crop-weed competition provide little insight into the specific weed control challenges that farmers will have to face in the coming century. The site-specific mix of weed and crop species, and the relative response of each of these species to environmental conditions in the future CO2-rich world, will determine the economic outcome for both farmers and consumers.
Recent research examining the effect of elevated CO2 on insect damage has found that leaf-feeding insects often must consume more foliage to survive on high CO2-grown plants, presumably because the leaves tend to have a lower protein concentration. Natural selection would tend to favor the evolution of insect genotypes that consume more plant material more rapidly. To combat this, farmers may be required to use more pesticides.
The climate changes that result from increased atmospheric CO2 concentrations will undoubtedly influence the geographic range of insect and disease pests. Warmer temperatures in high latitude areas may allow more insects to overwinter in these areas. Also, crop damage from plant diseases is likely to increase in temperate regions because many fungal and bacterial diseases have a greater potential to reach severe levels when temperatures are warmer or when precipitation increases.
Model Projections of Climate Change Impact on Food Supply
Several groups have attempted to model the impact of climate change on crop yields and food supply. A particularly comprehensive approach, involving the collaboration of many scientists worldwide, is that described by Rosenzweig and Parry (1993, 1994). They linked global climate model outputs with crop growth models, and then used the yield projections as inputs into a world food trade model. The analysis considered climate uncertainties by comparing results from three different general circulation models (GCMs), those from the NASA Goddard Institute for Space Studies (GISS), the Geophysical Fluid Dynamics Laboratory (GFDL) and the UK Meteorological Office (UKMO). Two levels of farmer adaptation to climate change, and the potential direct effects of CO2 were also considered.
Table 1 shows estimated yield changes for several important cereal crops. It is clear that regardless of GCM used, climate change had a substantial negative effect on yield unless a beneficial effect from elevated CO2 is assumed. With a CO2 fertilization effect, the impact of climate change on wheat and soybeans shifts from negative to positive for the GISS and GFDL climate scenarios, and rice yields also significantly improve. Maize yields remain substantially negative because this C4 crop is assumed to benefit little from an equivalent CO2 doubling.
The CO2 effect was incorporated into the crop models as a simple multiplier, increasing predicted yields of soybeans, wheat, rice, and maize by 34, 22, 19, and 7%, respectively. To obtain these values, the modelers had to rely on the published literature, which, as discussed above, is dominated by experiments conducted under optimum conditions. The CO2 effect multiplier was applied without taking into account the likely profound effect of regional, seasonal, and daily temperature variations on the magnitude of CO2 response. The predicted yields with CO2 effects in Table 1, therefore, probably overestimate the yield response to CO2 in many cases. On this point the authors themselves warn "... there is always uncertainty regarding whether experimental results will be observed in the open field conditions likely to be operative when farmers are managing crops." (Rosenzweig and Parry, 1993. p. 92).
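As a rough sketch of how such a multiplier enters the calculation (this is an illustration of the idea only, not the actual crop-model code, and the tonne-per-hectare figure below is an arbitrary example value), the adjustment simply scales each crop's climate-scenario yield by its assumed CO2 response:

# Assumed CO2-doubling yield multipliers taken from the percentages above.
CO2_MULTIPLIER = {"soybean": 1.34, "wheat": 1.22, "rice": 1.19, "maize": 1.07}

def yield_with_co2_effect(predicted_yield, crop):
    # Scale the yield predicted under the climate scenario alone by the
    # crop's assumed direct CO2 fertilization benefit.
    return predicted_yield * CO2_MULTIPLIER[crop]

# Example: a wheat yield of 3.0 t/ha without CO2 effects becomes 3.66 t/ha.
print(yield_with_co2_effect(3.0, "wheat"))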
There was considerable regional variation in yield response to climate change as indicated in Table 2 which shows predicted wheat yields for several countries. Yield increases in the highest latitude locations, for example in parts of Canada and the former USSR, were due to an extension of the frost-free growing season and improved temperatures for productivity. The optimistic assumptions regarding effects of CO2 also caused substantial yield benefits in all areas. Decreased yields were associated with faster plant development rates that shortened the growing period, decreases in water availability, and poor vernalization of winter wheat.
Many developing nations are in tropical or subtropical areas where a global warming will be of little benefit, and often a detriment, to crop yields. Also, these areas frequently have less capacity for irrigation, which becomes a more serious drawback in some climate scenarios where rainfall does not meet the increasing crop water requirements. These factors, combined with a relatively high population growth rate, will tend to increase the probability of food shortages in many developing nations. Figure 4 illustrates the large discrepancy in predicted cereal production for developed vs. developing nations. This particular analysis assumed direct positive CO2 effects, and "Level 1" adaptation to climate change by farmers, which included shifts in planting date, additional irrigation in areas with existing irrigation capacity, and changing to better adapted crop varieties from the existing pool of varieties available. Figure 5 shows the predicted increase in risk of world hunger with climate change and various levels of farmer adaptation. Adaptation Level 2 included development of new irrigation systems, increase in fertilizer application, and development of new varieties. This analysis shows that only if we assume Level 2 farmer adaptation capacity and an optimistic beneficial effect of elevated CO2 on yields are the risks of hunger likely to not increase beyond what we would expect without climate change.
Most analyses have concluded that although there will be both winners and losers within the U.S. agricultural sector, overall productivity is not likely to decline to the point of threatening national food security unless climate change is severe. The magnitude and direction of the predicted impact on U.S. agriculture varies depending on assumptions about climate and plant response to CO2. For example, the comprehensive study by Adams et al. (1990) predicted a 9% increase in production of field crops (including wheat, soybean, sorghum, cotton, oats, maize, hay and silage) using the GISS climate scenario, or a 20% yield decrease with the GFDL climate model. The GFDL model predicts slightly warmer, drier conditions for some regions of the U.S. than the GISS model. This assessment assumed an optimistic 20 - 35% yield boost due to the CO2 fertilization effect. The economic component of their analysis predicted that for the more severe GFDL climate scenario, U.S. consumers would face moderately higher food prices, but the major impact would be on exports. Both climate scenarios predicted significant reductions in cropped acreage in the Southeast, Southern Plains, and Northeast, and increases in the Northern Plains, Great Lakes, and Rocky Mountain regions.
Can Farmers Adapt to Climate Change?
Farmers in developing nations will be least able to adapt to climate change because of a relatively weak agricultural research base, poor availability of inputs such as water and seed of new varieties, and inadequate capital for making adjustments at both the farm and national level.
In contrast, the U.S. and many other developed nations have a strong agricultural research base, abundant natural resources for flexibility in cropping patterns, and capital available to pay for adaptations and buffer negative economic effects during transition. For this reason many are optimistic that farmers in developed nations will be able to take advantage of opportunities and minimize negative effects associated with climate change.
Adapting to climate change will be costly, however. Costs at the farm level will include such things as increased use of water, fertilizer and pesticides to maximize beneficial effects of higher CO2, and investment in new farm equipment and storage facilities as shifts are made to new crop varieties and new crops. Costs at the national level will include substantial diversion of agricultural research dollars to climate change issues, and major infrastructure investments, such as construction of new dams and reservoirs to meet increased crop water requirements. Environmental costs associated with agricultural expansion into some regions could include increased soil erosion, increased risk of ground and surface water pollution, depletion of water resources, and loss of wildlife habitat.
Developed as well as developing nations must be prepared to deal with the citizens in those regions negatively impacted by climate change. Regardless of capital availability, agricultural economies in some areas will collapse due to factors such as excessively high temperatures, severe pest pressure, lack of locally adapted varieties or poor markets for adapted crops. As climatic zones shift, there will be some cases where those zones with the best climate for crops will not have good soils or available water. It would be wise to begin examining national policies for their ability to handle these climate change issues. The CAST report on preparing the U.S. for climate change (CAST, 1992) emphasized the need for climate change-related agricultural research and suggested modifying existing policies to encourage more flexible land use, more prudent use of water resources, and freer trade.
The three major uncertainties regarding impacts of climate change on agriculture are: (1) the magnitude of regional changes in temperature and precipitation; (2) the magnitude of the beneficial effects of higher CO2 on crop yields; and (3) the ability of farmers to adapt to climate change. Current assessments suggest that, in all three categories, developed nations will frequently be at an advantage compared to most developing nations.
With regard to climate, many developed nations are in mid- to high-latitude locations, where warmer temperatures may improve crop yields by extending the growing season. In contrast, many developing nations are in subtropical and tropical areas, where global warming may lead to excessively high temperatures and reduce yields.
Many crop models account for the CO2 effect by globally increasing yields of C3 crops by 20 - 35%, which assumes a near optimum growth environment. Field conditions are seldom optimum, but farmers with access to adequate water, fertilizer, and other inputs will likely gain more from a CO2 doubling than farmers who do not have these resources. Temperatures may become too hot, or be too low despite global warming, for beneficial effects of CO2 in some areas. It should also be noted that those farmers producing C4 crops, such as maize, sorghum, millet and sugarcane, will see very little benefit from higher CO2, and at the same time their crops will face increased competition from C3 weeds.
Farmers in developed nations will have an obvious advantage in adapting to climate change because of a strong agricultural research base and capital available for farm inputs and infrastructure investments. Food security in these countries may not be directly threatened, and overall productivity may even increase, if global warming is moderate and the frequency of severe weather events does not increase. However, adapting to climate change will be costly. Even within countries that benefit at the national level, climate change is likely to have negative economic and environmental impacts in some areas as production zones shift.
Studies integrating climate, crop, and food trade models suggest that a moderate climate change may have only a small impact on world food production because reduced yields in some parts of the globe are offset by increased yields in others. Despite this, severe food shortages are likely to occur in some developing nations because of trade and local climate and resource constraints. This will have political consequences at the global level. Climate change will likely lead to an increase in world hunger unless population growth rates in developing nations are much smaller than currently projected, and farmers obtain adequate assistance. Adapting to climate change with minimal economic, social, and political upheaval will require a coordinated international effort to deal with the many serious consequences of climate change on agriculture.
Adams, RM, C Rosenzweig, RM Peart, JT Ritchie, BA McCarl, JD Glyer, RB Curry, JW Jones, KJ Boote, LH Allen, Jr. 1990. Global climate change and US agriculture. Nature 345:219-224.
Council for Agricultural Science and Technology (CAST). 1992. Preparing U.S. Agriculture for Global Climate Change. Task Force Report No. 119. CAST, Ames, Iowa. 96 pp.
Gates, DM. 1993. Climate Change and Its Biological Consequences. Sinauer Assoc. Inc., Sunderland, Massachusetts. 280 pp.
Kaiser, HM, SJ Riha, DS Wilks, DG Rossiter, R Sampath. 1993. A farm-level analysis of economic and agronomic impacts of gradual climate warming. Amer. J. Agr. Econ. 75:387-398.
Kimball, BA. 1983. Carbon dioxide and crop yield: An assemblage and analysis of 430 prior observations. Agron. J. 75:779-787.
Melillo JM, TV Callaghan, FI Woodward, E Salati, SK Sinha. 1990. Effects on ecosystems. In: Climate Change: The IPCC Assessment, JT Houghton, GJ Jenkins and JJ Ephraums (eds.). Cambridge Univ. Press, Cambridge. pp. 283-310.
Parry ML. 1990. Climate Change and World Agriculture. Earthscan Ltd., London. 157 pp.
Rosenzweig, C and ML Parry. 1993. Potential impacts of climate change on world food supply: A summary of a recent international study. In: Agricultural Dimensions of Global Climate Change, HM Kaiser and TE Drennen (eds.). St. Lucie Press, Delray Beach, Florida. pp. 87-116.
Rosenzweig C and ML Parry. 1994. Potential impact of climate change on world food supply. Nature 367:133-138.
Wolfe, DW. 1994. Physiological and growth responses to atmospheric carbon dioxide. In: Handbook of Plant and Crop Physiology. M Pessarakli (ed.). Marcel Dekker, Inc., New York. pp. 223-242.
Wolfe, DW and JD Erickson. 1993. Carbon dioxide effects on plants: uncertainties and implications for modeling crop response to climate change. In: Agricultural Dimensions of Global Climate Change, HM Kaiser and TE Drennen (eds.). St. Lucie Press, Delray Beach, FL. pp. 153-178.
| http://gcrio.org/USGCRP/sustain/wolfe.html | 13
14 | Salt marshes are vegetated coastal wetlands that are the subject of regular tidal inundation (Mitsch and Gosselink, 2000). They are typically found in low energy areas within embayments or along protected coasts. The protected nature of salt marsh systems allow for the trapping and accumulation of sediment by salt marsh vegetation. Sources of sediment include upstream runoff and coastal erosion. Salt marsh vegetation is organized along a gradient depending on species tolerance for saline and anoxic conditions (Mitsch and Gosselink, 2000). The low marsh, which receives tidal flow twice daily, is generally dominated by salt marsh cord grass (Spartina alterniflora). The high marsh, which receives less regular tidal influence, is comprised of a variety of species including salt marsh hay (Spartina patens), black grass (Juncus gerardii), and spike grass (Distichlis spicata) (Nixon, 1982; Bertness, 1991). Salt pannes, pools and tidal creeks may also characterize salt marsh habitats. Salt pannes are bare, exposed, or water-filled depressions in a salt marsh (Mitsch and Gosselink, 2000).
|Figure 1: Salt marsh on Thompson Island|
Salt marshes are one of the most productive ecosystems on earth. As such, they provide real and measurable environmental, social, and economic benefits. Salt marshes act as nurseries for commercially and recreationally important shell and fin fisheries (Cruz, 1973); provide habitat and food sources for birds and other wildlife; protect coastal areas from flooding and storm surges; and provide educational and recreational opportunities. Salt marshes also play a role in estuarine health by aiding in nutrient attenuation and cycling (Welsh, 1980); water quality improvement; shoreline stabilization; and mitigation for climate change and sea level rise (Gulf of Maine Council, 2008). Especially in urban areas where stormwater runoff can have high concentrations of pollutants and nutrients, salt marshes, like those found in the Boston Harbor region, absorb nutrients as they enter the estuary. These urban oases also offer critical refuges for humans and animals.
Salt marsh habitats may play a critical role in protecting coastal areas from the potential impacts of sea level rise. If able to migrate and adapt unimpeded, salt marshes may lessen the adverse impacts associated with sea level rise including increases in coastal flooding, storm surges, and erosion. As sea levels rise, a healthy salt marsh is more likely to capture sediment and keep pace with sea level rise. However, salt marshes that are degraded either with invasive species, a history of ditching, or have been cut off from sediment sources and natural tidal flows, may not be able to effectively respond to sea level rise (Kennish, 2001). Additionally, where hardened structures are built to the salt marsh edge, the salt marsh does not have the space available to migrate inland as sea levels rise. In situations where salt marshes are degraded or lack the space to migrate landward, the ecosystem functions they provide including protection against sea level rise can be greatly diminished or lost (Gulf of Maine Council, 2008).
Salt marshes are found in low energy waters along coasts, embayments, and rivers where tidal influence is present. Because salt marsh species require tidal inundation but cannot survive if constantly submerged, the seaward edge of salt marshes is marked by the low tide line and on the landward edge by the highest high tide line. Salt marsh habitats are found in highly specific ecological locations, thus their geographical extent can be diminished by coastal development, changes in sediment transport, poor water quality, and sea level rise.
Historically, salt marshes ringed the Boston Harbor region and extended well into the Saugus, Mystic, Charles, and Neponset Rivers (Carlisle et al., 2005). Now only a fraction of those historic marshes remain. It is estimated that salt marsh loss in the Boston Harbor region is close to 81% since pre-colonial times. This loss is largely due to placement of fill but is also a result of salt marsh ditching and the restriction of adequate tidal inundation (Reiner, 2011).
Figure 2: Historic USGS topographic map (c. 1900) overlaid with the current extent
of salt marsh in Rumney Marshes (MassDEP wetlands datalayer)(2009)
In 2006, the Office of Coastal Zone Management (CZM) with the US Fish and Wildlife Service and the University of Massachusetts described the changes in estuarine marsh over time by comparing maps and aerial photography of Boston Harbor from four time periods (1893, 1952, 1971, and 1995). The study showed great loss in estuarine marsh between 1893 and 1952 and between 1952 and 1971 (Carlisle et al., 2005). The periods of salt marsh loss coincide with population booms and city expansion. Estuarine marsh acreage has continued to decrease since 1971 but at a slower rate. The EPA performed a similar analysis of salt marsh loss focusing on the Rumney Marshes in Revere and Saugus. The EPA study showed a 40% loss in the Rumney Marshes salt marsh extent since 1800 (Carlisle et al., 2005).
Figure 3: Current extent of salt marsh in Boston Harbor Region (MassDEP 2009)
The integrity of our nation's coastal ecosystems is exposed to many threats, many of which are associated with their close proximity to population centers and development. In a national study, NOAA reported that 53 percent of the US population lives in coastal counties, but these coastal counties make up only 17 percent of the country's land area (excluding Alaska)(Crossett et al., 2004). Impacts from coastal development on salt marshes include an increase in nutrient runoff and invasive species, a lack of sufficient tidal flow, and the loss of salt marsh due to filling. This national picture is reflected in the Boston Harbor region as much of its salt marsh is degraded.
The degradation of salt marshes by human impacts comes in various forms. Although illegal now, historically, salt marshes were filled to form upland. Filling is largely attributed to the dramatic loss of salt marsh in the Boston Harbor region (Bromberg and Bertness, 2005). Historically, salt marshes were also drained or diked to create farmland. Present day causes of salt marsh loss are most commonly due to indirect impacts of coastal development. Undersized culverts installed beneath roads and railways restrict natural tidal hydrology. The lack of natural tidal hydrology can lead to degraded systems that are more susceptible to colonization by invasive species such as perennial pepperweed (Lepidium latifolium) and common reed (Phragmites australis). Also the encroachment of development abutting salt marshes can cause an unnatural and unsustainable input of freshwater runoff, nutrients, sediments, and toxins degrading the system. Freshwater runoff will lower the salinity of the system allowing for the potential colonization of invasive species while an increase in nutrient inputs can adversely affect the health of wildlife in the marsh (Gulf of Maine, 2008).
As documented by a number of studies, the condition of salt marshes in the Boston Harbor region suffers many of these fates. In the Atlases of Tidal Restrictions for the North and South Shores, CZM identified and prioritized salt marshes which were cut off from tidal flow. These salt marshes do not receive adequate tidal flushing due to under sized or absent culverts. The Atlases identify a total of 72 salt marshes in the Boston Harbor region, which are degraded due to lack of sufficient tidal flow. The University of Massachusetts also assessed salt marsh condition through an ecosystem based approach that assessed the ecological integrity of lands and waters in the state under the Conservation and Assessment Prioritization System (CAPS) project. CAPS developed an Index of Ecological Integrity (IEI) to identify and prioritize areas for land and habitat conservation efforts. Ecological integrity is defined as the ability of an area to support biodiversity and the ecosystem processes necessary to sustain biodiversity over the long term (UMass Amherst CAPS website). Maps depicting the IEI for the Boston Harbor region show areas of relatively higher ecological integrity associated with some of the Boston Harbor Islands, at the mouth of the Neponset, Fore, and Back Rivers and in portions of Rumney and Belle Isle Marshes. However, as a whole the Boston region has a lower IEI than many other coastal areas of the state including the Upper North Shore, Cape Ann and much of Cape Cod. Maps depicting IEI by town may be found at the bottom of the CAPS website. It is no surprise that these assessments depict salt marsh habitat within the Boston Harbor region as degraded and with a relatively low ecological value.
Protection and Restoration Potential
The dramatic loss of salt marsh in the Boston Harbor region is an indicator of restoration potential. Some of the historically filled marshes may be candidates for fill removal restoration projects. Other restoration opportunities lie in the replacement of undersized culverts and water conduits where development crisscrosses marshes without providing adequate hydrologic connections to the ocean. The presence of invasive species also creates an opportunity for salt marsh restoration since invasive plants such as perennial pepperweed and common reed significantly alter the functional value of salt marsh habitat. There also may be the opportunity to protect upland buffers between coastal development and salt marshes so that these systems may migrate landward with sea level rise (Gulf of Maine, 2008).
Federal, state, local, and non-profit organizations are actively involved in salt marsh restoration in Massachusetts. Consequently, there is a great wealth of restoration plans from which to draw for assessing restoration opportunities. Salt marsh restoration planning in the Boston Harbor region includes the development of site-specific restoration plans, the identification of tidally restricted marshes, and the assessment of ecosystems for restoration opportunities. Salt marsh restoration projects identified include the removal of fill, the reintroduction or increase of tidal flow, or the removal of invasive species. Examples of agencies or planning documents that identify salt marsh areas restoration opportunities in the Boston Harbor Region are listed below. These reports can also be viewed in the Boston Harbor Habitat Atlas map viewer.
- North Shore Atlas of Tidally Restricted Marshes (1996)
This Atlas identifies salt marshes on the North Shore that do not receive adequate tidal flow due to the undersizing or absence of culverts. The North Shore Atlas was the first of the Tidal Restriction Atlases created for the Massachusetts coast. The methodology was improved upon for the other Atlases.
- Restoring Wetlands of the Neponset River Watershed – A Watershed Wetlands Restoration Plan (2000)
This restoration plan is one of the oldest in the Boston Harbor region and is the only one that focuses on the Neponset River ACEC. The document serves as a good starting point for selecting restoration opportunities in this watershed.
- South Shore Atlas of Tidally Restricted Marshes (2001)
This Atlas identifies 30 salt marshes on the South Shore and within the Boston Harbor region, which do not receive adequate tidal hydrology due to the undersizing or absence of culverts.
- Rumney Marshes Area of Critical Environmental Concern Restoration Plan (2002)
This restoration plan identifies and provides basic information on approximately 30 restoration projects within the Rumney and Belle Isle Marshes. The projects are not prioritized or ranked.
- EPA's Rumney Marshes identified restoration areas
This geospatial compilation by EPA Region 1's Wetland Division identifies salt marsh restoration opportunities within the Rumney Marsh system. The data calls out fill removal areas and poorly functioning tide gates as well as areas of historic fill, completed projects, and completed projects which need additional attention. The data serves as the most current restoration information available for the Rumney Marsh ecosystem. The data will be available in the Atlas map viewer soon.
- NOAA's Restoration Atlas
The NOAA Restoration Atlas identifies seven projects within the Boston Harbor region. The Atlas displays information relating to project status, acreage, partners, funding and timeline.
- Division of Ecological Restoration's Active Habitat Restoration Priority Projects list
DER's Priority Project list currently includes six projects in the Boston Harbor region. This list of projects is the result of a peer reviewed process. Project proponents submit proposals which are evaluated based on a series of criteria including the project's public and ecological benefit, clarity of goals, level of support, cost, and size, among others.
- Logan Airport Runway Safety Area Improvement Project FEIR (2011)
The process of selecting salt marsh mitigation for the Logan Airport Runway improvements serves as the most recent survey of salt marsh restoration opportunities in the region. However, criteria such as size and land ownership used to choose the final mitigation may have eliminated viable restoration opportunities.
|Figure 4: Salt marsh restoration sites selected for Logan Runway Safety Area Improvement Project|
The most recent review of salt marsh restoration opportunities in the Boston Harbor region was associated with the mitigation study for the Logan Runway Safety Area Improvement Project. Restoration opportunities were sought because this project required mitigation for impacts to salt marsh. The initiative surveyed existing restoration plans, including those listed here, to find sites that matched the mitigation criteria (i.e., sites must be greater than 1 acre). The project identified 40 sites in the Boston Harbor Region that fit the specific criteria. Two sites were ultimately chosen as mitigation for the Logan Airport project, the other potential mitigation sites identified represent opportunities for restoration that have gone through some vetting by the salt marsh restoration community.
Protection and restoration opportunities in the Boston Harbor region include a range of project types. For a habitat that already boasts a long list of restoration partners in the Boston Harbor region, the protection and restoration opportunities identified here seek to support and coordinate with the existing initiatives by others.
- Review existing restoration plans. As noted above there are many planning documents which identify salt marsh restoration opportunities for the Boston Harbor region. However, these plans were created across many years, for different entities and with differing purposes. A formal and systematic review of these plans would allow for a true comparison of the opportunities. A proposal will be developed for a comprehensive survey of the restoration opportunities. The survey will evaluate restoration projects based on criteria including restoration acreage, complexity, and cost. Rapid site assessments performed on a subset of the sites will provide the basis for a prioritization of restoration opportunities in the region.
Investigate the strategic and prioritized removal of the I-95 berm. The I-95 berm, which crosses Rumney Marsh, represents the largest opportunity for fill removal restoration in the Boston Harbor region in terms of restored acreage and habitat function. The long-term goal is to maximize the removal of the berm thereby allowing for salt marsh restoration. A management plan that addresses the many competing uses of the berm for salt marsh restoration, flood control, passive and active recreation, visual barrier, and source of beach nourishment will be developed.
Identify local projects. Through engaging local planning departments, watershed associations, Departments of Public Works (DPW) and Conservation Commissions, smaller salt marsh restoration projects may be identified. These opportunities could include restoration of smaller salt marshes (i.e. Belle Isle Marsh) or working with DPWs to reduce direct and indirect (i.e., stormwater runoff) impacts when planning for road repairs, culvert replacement, and other maintenance activities.
Protect salt marsh for sea level rise. Recent studies have assessed the potential impacts of sea level rise in the Boston Harbor region (e.g. The Boston Harbor Association). Because salt marsh habitat provides a critical buffer between ocean and upland areas, understanding the ability of salt marsh habitat to migrate as sea level rises would be valuable. Existing sea level rise studies will be compared with mapped salt marsh areas to identify locations where salt marsh migration may be possible. These areas may then be evaluated for conservation, protection from development, and restoration or enhancement opportunities.
Bertness, M.D. 1991. Zonation of Spartina patens and Spartina alterniflora in a New England Salt Marsh. Ecology 72: 138-148.
Carlisle, B.K., R.W. Tiner, M. Carullo, I.K. Huber, T. Nuerminger, C. Polzen, and M. Shaffer. 2005. 100 Years of Estuarine Marsh Trends in Massachusetts (1893 to 1995): Boston Harbor, Cape Cod, Nantucket, Martha's Vineyard, and the Elizabeth Islands. Massachusetts Office of Coastal Zone Management, Boston, MA; U.S. Fish and Wildlife Service, Hadley, MA; and University of Massachusetts, Amherst, MA. Cooperative Report.
Crossett, Kristen, M., Culliton, Thomas, J., Wiley, Peter, C., and Timothy R. Goodspeed. 2004. Population Trends Along the Coastal United States: 1980 - 2008. National Oceanic and Atmospheric Administration.
Cruz, A. A. de la. 1973. The role of tidal marshes in the productivity of coastal waters. Bulletin of the Association of Southeast Biology 20: 147-156.
Final Environmental Assessment/Environmental Impact Report for Boston-Logan International Airport Runway Safety Area Improvements Project. Prepared for the Massachusetts Port Authority. Prepared by Vanasse Hangen Brustlin, Inc. January 2011. http://www.massport.com/environment/environmental_reporting/Documents/Environmental%20Filings/
2011_LoganRSA_EAEIR.pdf. Last viewed on 12/30/2011.
Gulf of Maine Council. 2008. Salt marshes in the Gulf of Maine - human impacts, habitat restoration, and long-term change analysis. http://www.gulfofmaine.org/saltmarsh/ Last viewed on 5/9/2012.
Kennish, Michael, J. 2001. Coastal salt marsh systems in the U.S.: A review of anthropogenic impacts. Journal of Coastal Research. Vol.17. No.3. 731-748.
Mitsch, William J. and James G. Gosselink. Wetlands. 3rd ed. New York: John Wiley & Sons, Inc., 2000.
Massachusetts Division of Ecological Restoration Active Habitat Restoration Priority Projects. http://www.mass.gov/dfwele/der/der_maps/pp_map.htm. Last viewed on 12/14/2011.
National Oceanic and Atmospheric Administration Restoration Atlas. http://seahorse2.nmfs.noaa.gov/restoration_atlas/src/html/index.html. Last viewed on 12/30/2011.
Nixon, S. W. 1982. The ecology of New England high salt marshes: A community profile. United States Department of the Interior, Washington, D.C., U.S.A.
Reiner, Ed. 2011. Rumney Marshes Restoration Areas. Personal Communication.
University of Massachusetts Amherst Conservation and Assessment Prioritization System (CAPS). http://www.umasscaps.org/data_maps/maps.html. Last viewed on 5/24/2012.
Welsh, Barbara. 1980. Comparative nutrient dynamics of a marsh-mudflat ecosystem. Estuarine Coastal Marine Science. Vol. 10 Issue 2: 143-164. | http://www.mass.gov/envir/massbays/bhha_saltmarsh.htm | 13
15 | Many people think that having hearing loss is like listening to a radio set to a low volume — the sound is simply not as loud. Although it is true that certain kinds of hearing loss can make sounds noticeably softer and more difficult to hear, there are in fact different types of hearing loss that can have vastly different effects on how sounds are heard and understood. The different types of hearing loss tend to have different causes, and it appears that having diabetes can contribute to the development of certain types of hearing loss.
The mechanics of hearing
Hearing is a process in which the ear is only the beginning of the story. The chain of events starts when sound enters the ear canal and causes the eardrum to vibrate. The vibrations set in motion the three tiny bones that form a chain in the middle ear space that connects the eardrum to the cochlea — a hollow structure that is coiled in the shape of a snail’s shell, containing three tubes filled with fluid. The last bone in the middle ear chain is connected to a membrane covering a small opening called the oval window at one end of the cochlea, and the vibrations of this membrane cause waves in the fluids inside the cochlea. This, in turn, causes movement of microscopic structures called hair cells, which are present in one of the tubes in the cochlea. The movement of these tiny hair cells creates an electrical signal that is sent to the hearing nerve, which connects the cochlea to the brain stem. The electrical signal travels up the brain stem and through a system of nerve pathways before arriving at specialized auditory centers of the brain where the message is finally processed. Amazingly, this entire chain of events takes only tiny fractions of a second. (See “A Look Inside the Ear” for more information about the mechanics of the ear.)
Types of hearing loss
Damage can occur anywhere along the hearing pathway. The location of the damage determines the type of hearing loss that occurs.
Conductive hearing loss (outer and middle ear). Trauma to the structures of the ear that physically transmit sound, such as the eardrum and the bones in the middle ear, can result in conductive hearing loss, which reduces the ear’s ability to physically conduct sound vibrations. The eardrum can be damaged by chronic infection, trauma resulting from pressure changes in the ear (such as those that occur in deep-sea diving), or blunt force to the ear or head. The tiny bones in the middle ear also can be damaged by blunt force. A condition called otosclerosis, which involves abnormal growth of bone in the middle ear, can reduce the strength of the sound vibrations that are transmitted into the cochlea, thereby reducing the volume at which sounds are heard.
Conductive hearing loss causes a reduction in the overall volume of sounds, but if speech can be made loud enough — by means of a hearing aid or the speaker talking louder, for instance — it can most often be understood. In many cases, areas of the ear involved in conductive hearing loss may be treated with medicines or repaired with surgery.
Sensorineural hearing loss and central processing disorders (inner ear and central hearing pathway). Damage to the inner ear or to structures along the nerve pathway is called sensorineural hearing loss because it involves either the delicate sensory hair cells in the cochlea or the hearing nerve, and sometimes both. When the nerve pathway from the ear to the brain is damaged, this is usually referred to as a central processing disorder. Unlike people with conductive hearing loss, those with sensorineural hearing loss or processing disorders may have difficulty understanding speech even when it is amplified. In fact, too high a volume can result in distortion of the speech, causing an unpleasant sound and making it even more difficult to understand. | http://www.diabetesselfmanagement.com/Articles/General-Diabetes-And-Health-Issues/the-ears-have-it/ | 13 |
14 | Invasion of Burma
Contributor: C. Peter Chen
Burma, isolated from the rest of the world by mountain ranges on her western, northern, and eastern borders, was a British colony with a degree of autonomy. Under pressure from Japan, the British armed Burma with some British and Indian troops and obsolete aircraft so that there would be a small buffer between Japan and India, the crown jewel of Britain's Asiatic empire. The United States also aimed to help Burma as a direct result of Japanese pressure, but its reason was much different from that of the British: the United States looked to keep Burma outside Japanese control so that supply lines into China would remain open. The supplies traveled into China via the Burma Road, a treacherous gravel road opened in 1938 that connected Kunming, China with Lashio, Burma. Britain's and the United States' worries about Burma were not unfounded, as Japan did look to incorporate Burma into her borders. Beyond the wish to cut off China's supply lines, a Japanese-occupied Burma would also give Japan added security against any flanking strike from the west on the southward expansion that was about to take place.
The Invasion Began
11 Dec 1941
On 11 Dec 1941, only days after Japan's declaration of war against Britain, Japanese aircraft struck airfields at Tavoy, south of Rangoon. On the next day, small units of Japanese troops infiltrated across the Burmese border and engaged in skirmishes against British and Burmese troops. On the same day, a Flying Tigers squadron transferred from China to Rangoon to reinforce against the upcoming invasion.
Under the banner of liberating Burma from western imperialism, the Japanese 15th Army of the Southern Expeditionary Army, under the command of Shojiro Iida, marched across the border in force from Siam. Airfields at Tavoy and Mergui fell quickly, removing whatever little threat the obsolete British aircraft posed and preventing Allied reinforcement from the air.
16 Dec 1941
As the invasion got underway, the United States recognized that she must assist British troops in the region. Brigadier General John Magruder, head of the American Military Mission to China, approached Chinese leader Chiang Kaishek for permission to transfer ammunition aboard the transport Tulsa, then docked in Rangoon, to the British troops. The goods were originally destined for the Chinese, but Magruder, arguing on behalf of Washington, urged that the British troops be given priority or the Burma Road might fall under Japanese control, making future supply runs impossible. Before Chiang responded, however, the senior American officer in Rangoon, Lieutenant Colonel Joseph Twitty, advised the government in Rangoon to impound the American ship while maintaining the United States' appearance of innocence. Chiang protested fiercely, calling it an "illegal confiscation". Chiang's representative in Rangoon, General Yu Feipeng, attempted to negotiate a compromise, but Chiang's attitude was more drastic. On 25 Dec, Chiang announced that he would allow all lend-lease supplies to go to the British in Burma, but that all Chinese troops in Burma would be withdrawn back into China and the British-Chinese alliance would end. For days, Magruder worked with Chiang and was finally able to secure Chiang's agreement to share the supplies with the British, but as a compromise, Magruder also had to give in to Chiang's demand that Twitty be removed from his position.
This incident, later labeled as the Tulsa Incident, exemplified the difficulties that Chiang's stern personality imposed on the relationship between China, Britain, and the United States.
The Battle of Sittang Bridge
22-31 Jan 1942
In Jan and Feb 1942, the Indian 17th Division under the command of British Major General John Smyth fought a campaign to slow the Japanese advance near the Sittang River. The Japanese 55th Division attacked from Rahaeng, Siam across the Kawkareik Pass on 22 Jan 1942, and over the next nine days pushed Smyth's troops to the Sittang Bridge, where they were enveloped and crushed. "The Allied defense was a disaster", said military historian Nathan Prefer. "Two understrength Japanese infantry divisions, the 33d and 55th, enjoyed victory after victory over Indian, British, and Burmese troops who were undertrained, inadequately prepared for jungle warfare, and completely dependent upon motor transport for all supply."
The Battle of Rangoon
Rangoon was attacked first by air; the few Royal Air Force and American Flying Tigers aircraft initially defended its air space effectively, but their numbers waned under constant pressure. Japanese troops appeared at Rangoon's doorstep toward the end of Feb 1942. Magruder gathered all the trucks he could to send as many lend-lease supplies north into China as possible; whatever could not be shipped out was given to the British, which included 300 Bren guns, 3 million rounds of ammunition, 1,000 machine guns with 180,000 rounds of ammunition, 260 jeeps, 683 trucks, and 100 field telephones. Nevertheless, he was still forced to destroy more than 900 trucks, 5,000 tires, 1,000 blankets and sheets, and more than a ton of miscellaneous items, all to prevent Japanese capture.
As Japanese troops approached Rangoon, two Chinese armies, the 5th and the 6th, marched south from China on 1 Mar 1942 to assist. The Chinese armies totalled six divisions, though half of them were understrength and most men of the 6th Army were undertrained green soldiers. Cooperation between the Chinese and the British was poor, though the Chinese regarded Americans such as General Joseph Stilwell, then in the temporary Chinese wartime capital of Chungking, rather highly.
Outside Rangoon, the British 7th Armored Brigade attempted to counterattack the Japanese troops marching from the direction of the Sittang River, but failed. On 6 Mar, Japanese troops reached the city, and the final evacuation order was given by British officers on the next day. Retreating troops demolished the port facilities to prevent Japanese use. Whatever aircraft remained of the RAF and the Flying Tigers relocated to Magwe in the Irrawaddy Valley south of Mandalay.
Battle of Tachiao
18 Mar 1942
On 8 Mar 1942, the 200th Division of the Chinese 5th Army began arriving in Taungoo, Burma to take over defensive positions from the British. At dawn on 18 Mar, about 200 Japanese reconnaissance troops of the 143rd Regiment of the Japanese 55th Division, on motorcycles, reached a bridge near Pyu and were ambushed by the Chinese; 30 Japanese were killed, and the Chinese captured 20 rifles, 2 light machine guns, and 19 motorcycles. After sundown, expecting a Japanese counterattack, the Chinese fell back to Oktwin, a few kilometers to the south. Pyu was captured by the Japanese on the following day.
Battle of Oktwin
20-23 Mar 1942
The Japanese 143rd Regiment and a cavalry formation of the Japanese 55th Division attacked defensive positions north of the Kan River in Burma manned by troops of the Cavalry Regiment of the Chinese 5th Army. The Chinese fell back toward Oktwin. At dawn on 22 Mar, 122nd Regiment of the Japanese 55th Division attacked outposts manned by a battalion of the Chinese 200th Division, but made little progress. After two days of heavy fighting, the Chinese fell back toward Taungoo, Burma after nightfall on 23 Mar.
Battle of Taungoo
24-30 Mar 1942
Taungoo, an important crossroads city in central Burma, housed the headquarters of Major General Dai Anlan's Chinese 200th Division. The city was attacked by the Japanese 112th Regiment on 24 Mar, which quickly surrounded it on three sides. At 0800 hours on 25 Mar, the main offensive was launched on the city in an attempt to push the Chinese defense toward the Sittang River. The Chinese held on to their positions, forcing the Japanese into brutal house-to-house fighting, which negated the Japanese firepower advantage. A counteroffensive launched by the Chinese at 2200 hours, however, failed to regain lost territory. On the next day, the Japanese again failed to penetrate the Chinese lines, and later in the day the Chinese repeated the previous day's performance with a failed counterattack that suffered heavy casualties. On 27 and 28 Mar, Japanese aircraft and artillery bombarded the Chinese positions to pave the way for an attack by the newly arrived Reconnaissance Regiment of the Japanese 56th Division. On the following day, the Japanese penetrated the northwestern section of the city in the morning, and by noon the headquarters of the Chinese 200th Division was seriously threatened. In the afternoon, Dai gave the order to retreat after nightfall. The Chinese 200th Division established a new defensive position at Yedashe to the north, joined by the New 22nd Division. Japanese troops would attack this new position on 5 Apr and overcome it by 8 Apr.
Battle of Yenangyaung
11-19 Apr 1942
On 11 Apr, the Japanese 33rd Division attacked the Indian 48th Brigade at the oil fields at Yenangyaung, using captured British tanks to support the assault. The fighting at first swung back and forth, then General William Slim's two divisions, which arrived in response, became cut off, leading British General Harold Alexander to request reinforcements for the Yenangyaung region from American Lieutenant General Joseph Stilwell in China. On 16 Apr, nearly 7,000 British troops were encircled by an equal number of Japanese troops. General Sun Liren arrived with the 113th Regiment of the Chinese 38th Division, 1,121-strong, on 17 Apr. Sun arrived without artillery or tank support, but that deficiency was quickly remedied by the support of Brigadier Anstice's British 7th Armored Brigade. The Chinese attacked southward, while Major General Bruce Scott led the British 1st Burma Division against Pin Chaung. On 19 Apr, the Chinese 38th Division took control of Twingon outside Yenangyaung, then moved into Yenangyaung itself, but even with the arrival of the 1st Burma Division the position could not be defended. The Allied forces withdrew 40 miles to the north. Although Yenangyaung fell under Japanese control in the end, nearly 7,000 British troops were saved from capture or destruction.
The British Withdraw
7 Mar-26 May 1942
Generals Alexander and Slim led the remaining forces north through the jungles toward Mandalay, slowing down the Japanese as much as they could. Supply became a critical issue after the fall of Rangoon and its port facilities. In Tokyo, it was decided that Burma was to be rid of all Allied troops. An additional regiment was assigned as reinforcement to the Japanese 33rd Division to bring it up to full strength. Soon after, two additional infantry divisions, the 18th and 56th, arrived in the theater, further bolstering Japanese numbers. The reinforcements arrived in the area undetected by Allied intelligence. Fresh Japanese troops moved north in three separate columns: one through the Irrawaddy Valley, another along the Rangoon-Mandalay Road in the Sittang Valley, and a third from Taunggyi in the east toward Lashio. Chinese troops attempted to delay the Japanese advances but failed; most of them fell back across the Chinese border almost immediately.
Alexander and Slim successfully retreated across the Indian border on 26 May 1942. Along the way, they destroyed precious oilfields so that they could not be used by the Japanese. As the British crossed into India, Japanese forces captured the entire country of Burma, including the important airfields in Myitkyina near the Chinese border.
Some time during the conquest of Burma, the Japanese set up a comfort women system similar to the systems seen in Korea and China. When the combined American and Chinese forces later retook Myitkyina in Aug 1944, 3,200 women were known to be retreating with the Japanese forces. 2,800 of the women were Koreans who had been forcibly relocated from their home country to serve the Japanese troops as prostitutes, but there were also many Burmese women who volunteered in the belief that the Japanese were there to liberate their country from western imperialism. Some Chinese women were seen in the ranks as well. The goal of such a system was to prevent Japanese soldiers from raping Burmese women and to prevent the spread of venereal diseases.
Conclusion of the Campaign
"I claim we got a hell of a beating", recalled Stilwell. "We got run out of Burma and it is embarrassing as hell." With Burma under Japanese control, the blockade on China was complete, but that was but a symptom of the real underlying issue: the conflicting goals of the three Allied nations involved in Burma. To Britain, Burma was nothing but a buffer between Japanese troops and India. To China, Burma was a sideshow of the Sino-Chinese War, though important in that it provided an important supply line. To the United States, Burma was the key to keep China fighting in order to tie down the countless number of Japanese soldiers in China so that they could not be re-deployed in the South Pacific. Meanwhile, caught between the politics of the three Allied nations and the Japanese invader, the Burmese people found that none of the warring powers willing to listen to their sentiments.
Sources: BBC, the Pacific Campaign, Vinegar Joe's War, US Army Center of Military History, Wikipedia.
Invasion of Burma Timeline
|12 Dec 1941||Churchill placed the defence of Burma under Wavell's command, promising four fighter and six bomber squadrons and matériel reinforcements, together with the 18th Division and what remained of 17th Indian Division (since two of its brigades had been diverted to Singapore). On the same day, the 3rd Squadron of the American Volunteer Group was transferred to Rangoon, Burma.|
|14 Dec 1941||A battalion from the Japanese 143rd Infantry Regiment occupied Victoria Point, Burma on the Kra River near the Thai-Burmese border.|
|22 Dec 1941||The Japanese 55th Division, commanded by Lieutenant General Takeuchi Yutaka, assembled at Bangkok, Thailand and was issued orders for it to cross the Thai-Burma frontier and capture Moulmein, which happened to be held by the Headquarters of 17th Indian Division.|
|23 Dec 1941||54 Japanese bombers escorted by 24 fighters attacked Rangoon, Burma in the late morning, killing 1,250; of those who became wounded as the result of this raid, 600 died.|
|28 Dec 1941||Lieutenant-General Thomas Hutton assumed command of Burma army. A competent and efficient Staff Officer (he had been responsible for the great expansion of the Indian army), he had not actually commanded troops for twenty years. Across the border in Thailand, Japanese Colonel Keiji Suzuki announced the disbandment of the Minami Kikan (Burmese armed pro-Japanese nationalists) organization, which would be replaced by the formation of a Burma Independence Army (BIA), to accompany the Invasion force.|
|29 Dec 1941||Japanese bombers struck Rangoon, Burma, destroying the railway station and dock facilities.|
|14 Jan 1942||Japanese forces advanced into Burma.|
|16 Jan 1942||The first clash between Japanese and British forces within Burma occurred when a column of the 3rd Battalion of the Japanese 112th Infantry Regiment was engaged by the British 6th Burma Rifles (plus two companies of the 3rd Burma Rifles and elements of the Kohine battalion BFF) at the town of Tavoy (population 30,000 and strategically important as it was the start of a metal road to Rangoon). By the 18th the Japanese had taken the town, having lost 23 dead and 40 wounded, but the morale of the defenders had been badly damaged and the Japanese column was able to move on to Mergui without serious opposition.|
|19 Jan 1942||Japanese troops captured the airfield at Tavoy (now Dawei), Burma.|
|20 Jan 1942||The Japanese advance guard crossed the border into Burma heading for Moulmein. Kawkareik was defended by 16th Indian Brigade under Brigadier J. K. "Jonah" Jones, but was widely dispersed covering the tracks leading to the border 38 miles away. The Japanese first encountered the 1st/7th Gurkha Rifles (who had only arrived on the previous day) near Myawadi. The Gurkhas were quickly outflanked and forced to withdraw. Within forty-eight hours the rest of 16th Infantry Brigade were forced to follow.|
|23 Jan 1942||The Japanese commenced a determined effort to establish air superiority over Rangoon, Burma. By 29 Jan seventeen Japanese aircraft had been shot down for the loss of two American Volunteer Group and ten Royal Air Force machines, forcing the Japanese temporarily to concede.|
|24 Jan 1942||Japanese aircraft attacked Rangoon, Burma for the second day in a row. From the Thai-Burmese border, Japanese troops marched in multiple columns toward Moulmein, Burma, looking to capture the nearby airfield.|
|25 Jan 1942||Japanese aircraft attacked Rangoon, Burma for the third day in a row. Meanwhile, Archibald Wavell ordered that the airfield at Moulmein, Burma, which was being threatened by troops of the Japanese 55th Infantry Division, be defended.|
|26 Jan 1942||Japanese aircraft attacked Rangoon, Burma for the fourth day in a row.|
|30 Jan 1942||Japanese 55th Infantry Division captured the airfield at Moulmein, Burma.|
|31 Jan 1942||Japanese 55th Infantry Division captured the town of Moulmein, Burma one day after the nearby airfield was captured; Burmese 2nd Infantry Brigade (Brigadier Roger Ekin) retreated across the Salween River during the night after having lost 617 men (mostly missing); Archibald Wavell however, unaware of the true situation, was appalled and angry to hear of the ease with which the Japanese had driven Burmese 2nd Infantry Brigade from the town. On the same day, Slim issued a report summarizing the air situation in Burma, noting the Allies had 35 aircraft in the area to defend against about 150 Japanese aircraft; while a few more Allied aircraft were en route for Burma, by mid-Mar 1942 there would be 400 operational Japanese aircraft in this theater of war.|
|3 Feb 1942||Burmese 2nd Infantry Brigade and a part of the Indian 17th Division withdrew from Martaban, Burma toward the Bilin River.|
|6 Feb 1942||Wavell, still angry at the loss of Moulmein, Burma, ordered 2nd Burma Brigade to "take back all you have lost". It was too late; the Japanese were already bringing more troops (33rd "White Tigers" Division and the Headquarters of 15th Army) across the frontier. Lieutenant-General Hutton insisted on abandoning Moulmein and taking up new positions on the Salween, which would be reinforced by the newly committed 46th Indian Brigade, brought down from the Shan States.|
|7 Feb 1942||The Japanese infiltrated across the Salween River in Burma, cutting the defenders of Martaban (3/7th Gurkhas with a company of the King's Own Yorkshire Light Infantry under command) off from the 46th Indian Brigade headquarters base at Thaton. The Gurkhas' Commanding Officer, Lieutenant Colonel H. A. Stevenson, knowing that his position was now untenable, led a bayonet charge to clear the road block. The subsequent retreat from Martaban (over difficult terrain with no food) of more than 50 miles in two days was a terrible ordeal and a foretaste of things to come.|
|10 Feb 1942||Japanese troops crossed the Salween River in Burma.|
|11 Feb 1942||Having crossed the Salween River at Kuzeik, Burma during the night, the Japanese II/215th Infantry Regiment engaged the raw and inexperienced 7/10th Baluch, who were deployed in a semi-circle with their backs to the river without barbed wire or artillery support. After dark the Japanese launched their attack on the Indian positions and after four hours of bitter hand-to-hand fighting began to get the upper hand. By dawn organized resistance had effectively ceased. The heroic 7/10th Baluch had suffered 289 killed, with the few survivors making off in small parties.|
|13 Feb 1942||In Burma, the British Commander-in-Chief Lieutenant-General Hutton requested Archibald Wavell to appoint a corps commander to take charge of operations and a liaison team to work with the Chinese. He received no reply as Wavell was incapacitated after suffering a fall.|
|14 Feb 1942||Indian 17th Infantry Division was ordered to defend against the Japanese advance toward Rangoon, Burma at the Bilin River.|
|15 Feb 1942||Japanese troops penetrated Indian 17th Infantry Division positions on the Bilin River north of Rangoon, Burma.|
|17 Feb 1942||Japanese troops crossed the Bilin River north of Rangoon, Burma and began to encircle the Indian 17th Infantry Division.|
|18 Feb 1942||After three days of confused fighting along the Bilin in Burma, Major General "Jackie" Smyth learned that he was threatened with being outflanked to the south by the Japanese 143rd Regiment. He committed his last reserves, 4/12th Frontier Force Regiment who fought a stiff action on 16th Indian Brigade's left but ultimately failed to dislodge the Japanese.|
|19 Feb 1942||Mandalay, Burma came under aerial attack for the first time. Meanwhile, the Japanese 143rd Regiment, having crossed the Bilin Estuary arrived at Taungzon, effectively bypassing the British and Indian positions along the Bilin River; Lieutenant General Hutton had no option but to permit a withdrawal to the Sittang.|
|20 Feb 1942||The Japanese attacked the positions of 16th and 46th Indian Brigades at Kyaikto, Burma, delaying the retreat from the Bilin to the Sittang Bridge for forty-eight hours and causing total confusion among the withdrawing columns. To make matters worse the Indians came under friendly air attack from RAF and AVG aircraft. In addition most of the Divisional Headquarters' radio equipment was lost in the confusion. In Rangoon, Hutton's implementation of the second phase of the evacuation of Europeans caused widespread panic, with much looting by drunken natives and the emptying of the city's gaols of lunatics and criminals.|
|21 Feb 1942||The 2nd Burma Frontier Force, who had been placed north of the Kyaikto track to warn against outflanking, were heavily engaged by the Japanese 215th Regiment and forced to withdraw north-west, crossing the Sittang River by country boats, and proceeding to Pegu. No report of this contact ever reached the divisional commander "Jackie" Smyth who was still hearing rumours of a threatened parachute landing to the west. To the south, British 7th Armored Brigade arrived at Rangoon by sea from Egypt.|
|22 Feb 1942||During the early hours, the Sittang Bridge in Burma became blocked when a lorry got stuck across the carriageway. With the Japanese closing in on Pagoda and Buddha Hills overlooking the important crossing, the British divisional commander "Jackie" Smyth had to accept that the bridge must be destroyed, even though a large part of his force was still on the east bank. Lieutenant-General Hutton was informed that he was to be replaced but was to remain in Burma as Alexander's Chief of Staff, a most awkward position which he endured until he was replaced at his own request by Major-General John Winter before returning to India in early April.|
|23 Feb 1942||The Sittang railway bridge in Burma was blown up to prevent its capture by the Japanese, even though most of General Smyth's command was still on the east bank. Smyth salvaged from the catastrophe 3,484 infantry, 1,420 rifles, 56 Bren guns and 62 Thompson submachine guns. Nearly 5,000 men, 6,000 weapons and everything else was lost. Despite many men making it back across the river without their weapons, 17th Indian was now a spent force. It would take the Japanese a fortnight to bring up bridging equipment which permitted the Europeans in Rangoon to make their escape from the doomed city.|
|28 Feb 1942||General Archibald Wavell, who believed Rangoon, Burma must be held, relieved Thomas Hutton for planning an evacuation.|
|2 Mar 1942||Japanese 33rd and 55th Infantry Divisions crossed Sittang River at Kunzeik and Donzayit, Burma, forcing the British 2nd Battalion Royal Tank Regiment to fall back 20 miles as the Japanese troops captured the village of Waw.|
|3 Mar 1942||Japanese troops forced Indian 17th Infantry Division out of Payagyi, Burma.|
|4 Mar 1942||In Burma, Japanese troops enveloped Chinese troops at Toungoo while British 7th Queen's Own Hussars regiment clashed with Japanese troops at Pegu.|
|6 Mar 1942||Anglo-Indian and Japanese troops clashed at various roadblocks near Rangoon, Burma.|
|7 Mar 1942||£11,000,000 worth of oil installations of Burmah Oil Company in southern Burma near Rangoon were destroyed as British retreated from the city, preventing Japanese capture; this destruction would result in 20 years of High Court litigation after the war. Also destroyed were 972 unassembled Lend-Lease trucks and 5,000 tires. From Rangoon, 800 civilians departed aboard transports for Calcutta, India. The Anglo-Indian troops in the Rangoon region were held up by a Japanese roadblock at Taukkyan, which was assaulted repeatedly without success.|
|8 Mar 1942||200th Division of the Chinese 5th Army arrived at Taungoo, Burma to assist the British defense.|
|9 Mar 1942||Japanese troops entered undefended Rangoon, Burma, abandoned by British troops two days prior.|
|10 Mar 1942||Japanese 55th Infantry Division began pursuing the retreating British troops from Rangoon, Burma.|
|15 Mar 1942||Harold Alexander admitted to Joseph Stilwell that the British had only 4,000 well-equipped fighting men in Burma.|
|18 Mar 1942||Chinese troops ambushed 200 Japanese reconnaissance troops near Pyu in the Battle of Tachiao, killing 30. Meanwhile, aircraft of the 1st American Volunteer Group "Flying Tigers" bombed the Japanese airfield at Moulmein, claiming 16 Japanese aircraft destroyed on the ground. Off the Burmese coast, troops from India reinforced the garrison on Akyab Island.|
|19 Mar 1942||Japanese troops captured Pyu, Burma.|
|20 Mar 1942||Japanese 143rd Regiment and a cavalry formation of the Japanese 55th Division attacked troops of the Cavalry Regiment of the Chinese 5th Army north of the Kan River in Burma.|
|21 Mar 1942||151 Japanese bombers attacked the British airfield at Magwe in northern Burma, the operating base of the Chinese Air Force 1st American Volunteer Group "Flying Tigers"; 15 Sino-American aircraft were destroyed at the cost of 2 Japanese aircraft. Meanwhile, at Oktwin, forward elements of Japanese 55th Division engaged Chinese troops.|
|22 Mar 1942||American and British airmen abandoned the airfield in Magwe in northern Burma. To the southeast, at dawn, troops of the 600th Regiment of the Chinese 200th Division ambushed troops of the 122nd Regiment of the Japanese 55th Division near Oktwin, Burma.|
|23 Mar 1942||Chinese troops held the Japanese attacks in check near Oktwin, Burma, but withdrew toward Taungoo after sundown.|
|24 Mar 1942||Japanese 112th Regiment attacked Taungoo, Burma, overcoming the disorganized Chinese outer defenses. Meanwhile, Japanese 143rd Regiment flanked the Chinese defenses and captured the airfield and rail station 6 miles north of the city. Taungoo would be surrounded on three sides by the end of the day.|
|25 Mar 1942||The main Japanese offensive against Taungoo, Burma began at 0800 hours, striking northern, western, and southern sides of the city nearly simultaneously. Fierce house-to-house fighting would continue through the night.|
|26 Mar 1942||Chinese and Japanese troops continued to engage in house-to-house fighting in Taungoo, Burma, with heavy losses on both sides.|
|27 Mar 1942||Japanese aircraft and artillery bombarded Chinese positions at Taungoo, Burma.|
|28 Mar 1942||A fresh regiment of the Japanese 56th Division attacked Chinese-defended city of Taungoo, Burma.|
|29 Mar 1942||The Japanese penetrated the Chinese defenses at Taungoo, Burma and threatened to trap the Chinese 200th Division in the city. General Dai Anlan issued the order to retreat from the city after sundown, falling back northward. During the withdrawal, the Chinese failed to destroy the bridge over the Sittang River. To the west, the Japanese captured a main road near Shwedaung, disrupting the Allied withdrawal; an Anglo-Indian attack from the south failed to break the roadblock.|
|30 Mar 1942||Japanese 55th Division attacked Taungoo, Burma at dawn, capturing it without resistance as the Chinese 200th Division had evacuated the city overnight. To the west, British 7th Armoured Brigade broke through the Japanese roadblock at Shwedaung, but had a tank destroyed on the nearby bridge over the Irrawaddy River, blocking traffic. Shortly after, the Japanese-sponsored Burma National Army attacked the British troops as they attempted to maneuver around the disabled tank, killing 350 while suffering as many losses.|
|2 Apr 1942||Japanese troops drove Indian 17th Division out of Prome, Burma.|
|3 Apr 1942||Six B-17 bombers of the US 10th Air Force based in Asansol, India attacked Rangoon, Burma, setting three warehouses on fire; one aircraft was lost in this attack.|
|4 Apr 1942||Japanese aircraft bombed areas of Mandalay, Burma, killing more than 2,000, most of whom were civilians.|
|5 Apr 1942||Japanese and Chinese troops clashed at Yedashe in central Burma.|
|6 Apr 1942||Japanese troops captured Mandalay, Burma. Off Akyab on the western coast of Burma, Japanese aircraft sank Indian sloop HMIS Indus.|
|8 Apr 1942||Japanese troops overran Chinese 200th Division and New 22nd Division defensive positions at Yedashe, Burma.|
|10 Apr 1942||Japanese and Chinese troops clashed at Szuwa River, Burma.|
|11 Apr 1942||In Burma, British troops formed a new defensive line, Minhia-Taungdwingyi-Pyinmana, on the Irrawaddy River. After dark, the Japanese reached this line, launching a first attack on the Indian 48th Brigade at Kokkogwa.|
|12 Apr 1942||Japanese attacks on Minhia, Thadodan, and Alebo on the Minhia-Taungdwingyi-Pyinmana defensive line in Burma were stopped by Anglo-Indian troops including the British 2nd Royal Tank Regiment. British tankers reported seeing captured British tanks pressed into Japanese service.|
|13 Apr 1942||Japanese troops continued to assault the Minhia-Taungdwingyi-Pyinmana defensive line along the Irrawaddy River in Burma without success. To the northwest, troops of Japanese 56th Infantry Division captured Mauchi from troops of Chinese 6th Army and the nearby tungsten mines.|
|15 Apr 1942||As Japanese troops began to push through the British Minhia-Taungdwingyi-Pyinmana defensive line along the Irrawaddy River in Burma and approached the oil-producing region of Yenangyaung, William Slim gave the order to destroy 1,000,000 gallons of crude oil to prevent Japanese capture while the British 7th Armoured Brigade pushed through Japanese road blocks to prepare the men on the line to fall back.|
|16 Apr 1942||Japanese troops decisively defeated the 1st Burma Division near Yenangyaung, Burma.|
|17 Apr 1942||William Slim launched a failed counterattack with the Indian 17th Division near Yenangyaung, Burma; he had wanted the counterattack to open up Japanese lines, to meet with troops of the 113th Regiment of Chinese 38th Division fighting to relieve Yenangyaung, and to allow the remnants of the 1st Burma Division to return to the main Allied lines. To the east, Japanese 56th Infantry Division and Chinese troops clashed at Bawlake and Pyinmana, Burma.|
|18 Apr 1942||Although the 113th Regiment of the Chinese 38th Division under General Sun Liren and the British 7th Armoured Brigade had reached near Yenangyaung, Burma, they could not prevent the Japanese troops from capturing the city; the final elements of British troops fleeing out of the city destroyed the power station to prevent Japanese use.|
|19 Apr 1942||The 113th Regiment of the Chinese 38th Division under General Sun Liren captured Twingon, Burma then repulsed a Japanese counterattack that saw heavy casualties on both sides. To the east, Japanese 55th Infantry Division captured Pyinmana.|
|20 Apr 1942||Japanese troops captured Taunggyi, Burma, capital of the southern Shan States, along with its large gasoline store. In central Burma, troops of the Japanese 56th Division pushed Chinese troops out of Loikaw, while troops of the Japanese 18th Division clashed with Chinese troops at Kyidaunggan.|
|21 Apr 1942||Japanese 18th Division captured Kyidaunggan, Burma from Chinese troops.|
|22 Apr 1942||British troops fell back to Meiktila, Burma while Indian 17th Infantry Division fell back from Taungdwingyi to Mahlaing to protect Mandalay.|
|23 Apr 1942||Chinese mercenary troops under Allied command attacked Taunggyi, Burma while Japanese 56th Division captured Loilem.|
|24 Apr 1942||Japanese 18th Infantry Division captured Yamethin, Burma.|
|25 Apr 1942||Alexander, Slim, and Stilwell met at Kyaukse, Burma, 25 miles south of Mandalay. It was decided that all Allied troops were to be pulled out of Burma, but Slim demanded that no British or Indian units be withdrawn to China even if the Chinese border was closer than India's. Meanwhile, Japanese and Chinese troops clashed at Loilem, central Burma.|
|26 Apr 1942||In Burma, the Indian 17th Division moved from Mahlaing to Meiktila, 20 miles to the south, to assist the Chinese 200th Division in forming a line of defense against the Japanese attack on Mandalay.|
|28 Apr 1942||Troops of the Chinese 28th Division arrived at Lashio in northern Burma. To the west, the Indian 17th Division crossed the Irrawaddy River at Sameikkon, Burma on its retreat toward India; Chinese 38th Division and British 7th Armoured Brigade formed a line between Sagaing and Ondaw to guard the retreat.|
|29 Apr 1942||Japanese 18th Infantry Division captured Kyaukse, Burma just south of Mandalay. To the west, Japanese 33rd Infantry Division pursued the Anglo-Indian withdrawal across the Irrawaddy River toward India. To the north, 100 kilometers south of the border with China, Japanese 56th Infantry Division captured Lashio at midday.|
|30 Apr 1942||In western Burma, Chinese 38th Division began to move westward to join the Anglo-Indian troops already en route for India. After the tanks of the British 7th Armoured Brigade had successfully crossed the Ava Bridge over the Irrawaddy River, Chinese troops blew up the bridge to slow the Japanese pursuit.|
|1 May 1942||Japanese 18th Infantry Division captured Mandalay, Burma. 300 kilometers to the northeast, Japanese and Chinese troops clashed at Hsenwi. 50 miles west of Mandalay, Japanese troops blocked the British retreat at Monywa on the Chindwin River and then attacked from the rear by surprise, capturing the headquarters of the 1st Burma Division.|
|2 May 1942||1st Burma Division unsuccessfully attacked Japanese 33rd Infantry Division at Monywa, Burma on the Chindwin River.|
|3 May 1942||Having fought off the attack by the 1st Burma Division at Monywa, Burma, Japanese 33rd Infantry Division went on the offensive pushing 1st Burma Division back toward Alon.|
|4 May 1942||Japanese troops captured Bhamo, Burma. Off the Burmese coast, with increasing malaria cases affecting the garrison's morale, Akyab Island was abandoned.|
|8 May 1942||Japanese troops captured Myitkyina, Burma.|
|9 May 1942||By this date, most troops of the Burma Corps had withdrawn west of the Chindwin River.|
|10 May 1942||The Thai Phayap Army invaded Shan State, Burma. In western Burma, Gurkha units, rearguard to the British general retreat, held off another Japanese assault throughout the afternoon; they also withdrew westwards after sundown.|
|12 May 1942||The monsoon began in Burma, slowing the retreat of Allied troops into India, but it also stopped Japanese attempts to attack the retreating columns from the air.|
|15 May 1942||The retreating Allied columns reached Assam in northeastern India.|
|18 May 1942||Most of the retreating troops of BURCORPS reached India.|
|20 May 1942||Japanese troops completed the conquest of Burma. All Allied troops previously under the command of William Slim (who was transferred to Indian XV Corps) were reassigned to the British IV Corps, thus dissolving the Burma Corps.|
|23 May 1942||Japanese and Chinese troops clashed along the Hsipaw-Mogok road in northern Burma.|
|25 May 1942||Chinese 38th Infantry Division began to cross the border from Burma into India.|
|27 May 1942||Thai forces captured Kengtung, Burma.|
Fleet Admiral Chester W. Nimitz, 16 Mar 1945 | http://ww2db.com/battle_spec.php?battle_id=59 | 13 |
26 | Ask any fourth grader who the first Americans were, and he will tell you that it was the Indians who came across the Bering Strait. These same Indians didn't stop in Arizona, however. Small groups of them kept wandering south in search of food, and by the year 2000 BC they had settled all over Central America. In Managua, Nicaragua, there are footprints which were formed, according to native legend, by people fleeing to Lake Managua from a volcanic eruption almost 10,000 years ago. These first explorers found that the soil formed by the volcanic ash and minerals was perfect for growing food like beans and maize. They no longer had to wander in search of food to survive, and so they began to build permanent civilizations. In Nicaragua, only small agricultural communities developed in the East, and their civilizations never became as advanced as those of their neighbors who formed the mighty Aztec and Mayan Empires. In spite of the fact that they did not leave behind any large ruins as proof of their development, the indigenous Nicaraguans were very skilled craftsmen who left behind intricate stone carvings, pottery, and gold jewelry.
See a picture of indigenous Indian rock carvings.
But history changed forever when the coast of the Americas was sighted by Christopher Columbus in 1502. Twenty-two years after that first European sighting, Nicaragua was settled by a Spanish expedition led by Francisco Hernandez de Cordoba, who claimed the region for his native country. In 1524, the settlements of Leon and Granada were founded by Hernandez, and Nicaragua became an official part of the Spanish Empire. These two cities still exist today, by the way, and they are important centers of Nicaraguan culture and economy.
Spanish rule was imposed on all the Indians who had not died from the Old World diseases which the "conquistadors" had brought with them or who had not been carried away as slaves. It is estimated that in western Nicaragua alone, what had been a population of over one million Indians was crushed to a few tens of thousands by the end of the Spanish conquest. Also, historical research indicates that as many as half a million native Nicaraguans may have been exported as slaves to Panama and Peru. Most of these unfortunate souls died en-route to their destination or after a year or two in slavery as a result of the deplorable conditions. However, although vast quantities were annihilated, some Indians did survive the onslaught.
This drastic reduction in population was not the only major change that the Spanish brought to the region. Before the conquest, labor-intensive agriculture was commonplace because the Indians grew corn, beans, peppers, and squash as assigned to them by their caciques. (Caciques were the Indian chiefs whom the Spanish manipulated by way of bribes and alliances to extract gold and slaves.) Although the common Indian had to give a certain portion of their crops to their cacique as a kind of tax, they could keep the rest to eat or sell in the market. However, because of the drastic reduction in population, there were not enough farmers left to till the earth and much of the agricultural land reverted to jungle and became unusable to the future inhabitants. Also, the Spanish forced the people to produce goods such as gold, silver, timber, and cattle which could be exported to Spain or traded with the other colonies instead of the basic bread-basket foods which they had been growing for centuries. The Indians, even though they were far more numerous than their Spanish masters, provided the labor to fund this export- based economy. This in itself was not so terribly bad; what was terrible about this situation was the fact that most of the wealth which was produced flowed into the hands of the tiny white minority and very little trickled down into the hands of the common people.
As the conquistadors tried to impose their religion, language, and customs on the conquered people, many cultural aspects were altered drastically as well. However, nowhere was the transition complete: many items and cities retain their native Nicaraguan names, and Indian customs are still in evidence. By and large, however, Nicaragua was hispanicized; Spanish became the language of the people and Catholicism became the almost universal religion. All the new cities such as Granada and Leon were built following the typical Spanish system of plazas, city markets, cathedrals, and public buildings.
The conquest also revolutionized the social system by establishing brand new class patterns. The pre-Columbian societies of Central America operated on the basis of a hierarchical system, and it was this fact which facilitated the superimposition of the Spanish on the system. What changed, however, was the fact that social classes came to be determined by race. Two highly unequal classes emerged with the Spanish as the obvious superiors. Those who were Spanish by birth or descent became the ruling class and everybody else became the poor and seemingly worthless lower class. Within this lower class, a system of classification developed based on the amount of Spanish blood flowing through each individual's veins. A group of people called the mestizos, the offspring of Spanish men and Indian women, were at the top of this sub-system with the pure-blooded, indigenous Nicaraguans at the very bottom.
From 1526 to 1821, Nicaragua was governed by Spain and was considered to be one of her colonies. Pedrarias Davila governed the young colony from 1526 until his death in 1531. A period of intense rivalry and civil war among the Spanish conquerors arose soon after the end of his governorship, and Nicaragua was incorporated into the captaincy-general of Guatemala. However, the administrative power really lay in Spain and the viceroyalty was merely a name in most cases. As a result of the new system of dependency, most of the development and moneys were spent in Guatemala, thus causing resentment and rivalry to grow in the other regions of Central America such as Nicaragua.
See a picture of early Nicaragua.
Colonial Nicaragua enjoyed comparative peace and prosperity, although freebooters like the English navigators Sir Francis Drake and Sir Richard Hawkins continually disrupted that prosperity by raiding and destroying coastal settlements. During the 1700s, the British managed to ally themselves with the Miskito, a Native American group of people intermarried with blacks, and they began to severely challenge Spanish control. For a period during and after the middle of the century the Mosquito Coast was considered a British dependency. However, the Battle of Nicaragua, which lasted from 1775 to 1783, the period of the American Revolution, ended Britain's attempts to win a permanent foothold in Nicaragua.
Unlike the United States, Central America managed to pass from colonial rule into formal independence with almost no violence. Central America merely followed Mexico's lead and broke with Spain in mid-1821. In January of 1822, Central America joined the Mexican empire of Agustin de Iturbide. However, he abdicated in mid-1823 and his short reign ended. Shortly after that, Central America decided it was tired of its relationship with Mexico, and all but the province of Chiapas, which chose to remain united with Mexico, declared themselves independent once again. From then until 1838, the region was nominally unified into a federation called the United Provinces of Central America.
At first, the union seemed like a great idea and everyone was excited about the possibilities. They reasoned that Central America would be politically and economically stronger as one unit instead of five small pieces. However, from the very beginning, powerful forces worked to destroy the fragile relationship. First of all, the resentment that most of the nations held for Guatemala grew even larger when Guatemala received eighteen of the forty-five seats in the congress and therefore dominated policymaking. Second, the Constitution of 1824 declared each state to be "free and independent" in their internal affairs. However, the Constitution also contained contradictory features which supported nationalist and centralist ideas and these ideas tended to hamper the freedom which each country sought in their "internal affairs." Finally, two parties, the Liberals and the Conservatives, began to emerge out of the ruling elite and their rivalry threatened the union. Liberals and Conservatives not only disputed within their own provinces, but also across borders. As a result, meddling in neighbor's affairs has become a common practice of Central American leaders. These three factors worked together to create tension and resurgent civil war. Everything just blew up in 1838 as first Nicaragua and then everybody else split. Several more attempts were made to reunify the countries, but none were ever successful.
Leon went on to become the center of the Liberals (Los Liberales), and Granada became the political center of the Conservatives (Los Conservativos). Faction-based strife began to heat up as the Liberals fought to establish an independent nation and declared Nicaragua an independent republic in 1838. This strife became characteristic of Nicaraguan politics and still continues today in a different form. Even after the declaration, civil strife continued, and in 1855 William Walker, an American adventurer with a small band of followers, was hired by the Liberals to head their forces in opposition to the Conservatives. Walker captured and sacked Granada, set himself up as president of Nicaragua in 1856, and sought US statehood. However, Walker made a fatal mistake when he seized the property of Cornelius Vanderbilt, because Vanderbilt retaliated by backing the Conservatives, who forced Walker to leave the country in 1857. Vanderbilt's interest in Nicaragua was due to his Accessory Transit Company, which he had founded in 1849 to facilitate the California Gold Rush.
See a picture of early Leon.
In 1893, the Liberals brought about a successful revolution which brought their leader Jose Santos Zelaya to power. Zelaya remained president for the next 16 years, ruling as a dictator. He was forced out in 1909, after Adolfo Diaz was elected provisional president. Diaz requested United States military assistance to maintain order after a revolt in 1912, and US marines landed a few years later. Under the Bryan-Chamorro Treaty of 1916, the US paid $3 million to Nicaragua for the right to build a canal across the country from the Atlantic Ocean to the Pacific Ocean, to lease the Great and Little Corn Islands, and to establish a naval base in the Gulf of Fonseca. The agreement was extremely unpopular with many elements, and it aroused anti-American guerrilla warfare in Nicaragua as well as protests from other Central American countries. When the American marines left in 1925, rebellions began, and the marine force returned a year after its departure. Under American supervision, an election was held in 1928, and General Jose Moncada, a Liberal, was chosen. One Liberal leader, however, Augusto Cesar Sandino, engaged the US forces in guerrilla warfare for many years. The U.S. Government withdrew the marines in 1933, leaving Anastasio Somoza commander of the National Guard. Somoza purportedly had Sandino assassinated and was elected president in 1937. Thus began the Somoza dynasty, which ruled Nicaragua as a dictatorship for the next 43 years.
See a picture of Augusto Sandino and Anastasio Somoza Garcia.
Pearl Harbor was bombed, and on December 9, 1941, Nicaragua entered World War II. In June of 1945, it became a charter member of the United Nations. Nicaragua joined the Organization of American States in 1948 and the Organization of Central American States, created to solve common Central American problems, in 1951. In 1956, Anastasio Somoza, who had resumed the presidency, was assassinated. He was succeeded by his son, Luis Somoza Debayle, who first served out his father's term and was then elected in his own right. For four years after the end of his term, close associates, instead of the actual Somoza family, held the presidency. Then, in 1967, Anastasio Somoza Debayle, the younger son of the former dictator, was elected president. Debayle was a military-minded autocrat and he repressed his opposition with the aid of the National Guard.
In August 1971 the legislature abrogated the constitution and dissolved itself, and in February 1972, Somoza's Liberal party won in a landslide. In May, Somoza stepped down to the post of chief of the armed forces; political control was assumed by a trio of two Liberals and one Conservative.
The forces of nature struck and devastated Nicaragua on December 23, 1972, when a massive earthquake virtually leveled the city of Managua. The earthquake left 6,000 dead and 20,000 injured in its wake. Martial law was declared, and Somoza in effect became chief executive again. Sadly, however, Somoza did not use the international aid he received to rebuild the country in a prudent manner, and opposition to his regime grew even stronger. He formally became president again with his re-election in 1974.
By the late 1970s, the economy of Nicaragua was stagnant and the people were ripe for a revolution. Then in 1978, an editor of the anti-Somoza newspaper La Prensa was assassinated and the people began to blame Somoza. The anti-Somoza guerrilla forces under the leadership of the Sandinista National Liberation Front (FSLN) began to violently oppose the existing military and the country was plunged into a virtual civil war. The United States was so worried that a Communist regime would emerge from the chaos which had taken over Nicaragua that they urged Somoza to resign so that a moderate group could take power. In 1978, the US and the OAS failed in mediation attempts with Nicaragua and the US suspended military aid to Somoza. Somoza did in fact resign on July 17 of 1979 and flew to Miami, Florida and then to Paraguay in exile. In 1980, radicals found him and assassinated him in Paraguay.
See a picture of farmer.
Control of the country was shifted to a junta of five people, one of whom was Violeta Chamorro, and this junta ruled Nicaragua from 1980 to 1985. The junta began to lean more and more toward left-wing policies, and Chamorro resigned in disgust and turned her late husband's newspaper into an opposition voice against these policies. Around this time, a group of Sandinista opponents sprang up and became known as contras. In 1981, the US began to fund the contras in their guerrilla war against the Sandinistas in order to continue the US foreign policy of suppressing communism. Facing enormous economic difficulties, the junta made an agreement with the USSR for an aid package. Of course, the US became even more desperate, fearing that another Cuba was in the making. In 1985, Daniel Ortega, the FSLN's presidential candidate, took office, declared a state of national emergency, and suspended civil rights. At this time, what has become known as the Iran-Contra Affair occurred, an operation in which funds were secretly channeled to the contras, directly violating the Boland Amendment. By 1988, the country was a social and economic disaster zone as a result of the civil war and Hurricane Hugo, so President Ortega agreed to the first peace talks with the contras, and a temporary truce was achieved in March. In 1990, the moderate Violeta Chamorro became President of Nicaragua as a result of free elections, and she was able to maintain peace in the land throughout her term of office and improve relations with the US.
In 1997, the conservative candidate of the Liberal Party, Arnoldo Aleman Lacayo, was elected over Daniel Ortega, 49 percent to 39 percent, and Aleman's party gained a majority in the National Assembly as well. The transfer of power from Chamorro to Aleman was the first peaceful transfer of power from one democratically elected president to another in Nicaragua's history. The new administration has stated that it is committed to further reforms that will ensure sustainable economic growth. These reforms include improving the business climate through the privatization of the few state enterprises that are left, strengthening law enforcement, resolving private property disputes, and remaking tax and investment laws as well as the judicial system. In May of 1997, a demobilization accord was reached with the many armed bands that had been operating in remote areas of the country. Reforms of the Tax and Commercial Justice Law have been made to reduce income tax rates, widen the tax base, lower average import tariffs, and increase the tax contribution from consumption. International trade and exchange controls have been vastly reduced, inviting more and more trade. The government appears to be succeeding in its effort to improve economic conditions: the GDP growth rate has reached almost 6%, the highest in Central America, the IMF has accepted Nicaragua's structural adjustment plan (ESAF), and the Paris Club has renegotiated the payment of over a billion dollars of the national debt.
| http://library.thinkquest.org/17749/lhistorysummary.html | 13
32 | People With Disabilities
People with disabilities are not conditions or diseases. They are individual human beings. See the person who has a disability as a person, not as a disability. For example, a person is not an epileptic but rather a person who has epilepsy. First and foremost they are people. Only secondarily do they have one or more disabling conditions. They prefer to be referred to in print or broadcast media as people with disabilities.
Distinction between Disability and Handicap
The term disability is defined as a physical or mental impairment that substantially limits one or more of a person's major life activities, a record of such impairment, or being regarded as having such an impairment.
This is the same definition used in Sections 503 and 504 of the Rehabilitation Act and the Fair Housing Amendments Act. A disability is a condition caused by an accident, trauma, genetics or disease which may limit a person's mobility, hearing, vision, speech or mental function. Some people have more than one disability. A handicap is a physical or attitudinal constraint that is imposed upon a person, regardless of whether that person has a disability. Webster's Ninth New Collegiate Dictionary defines handicap as "to put at a disadvantage."
For example, some people with disabilities use wheelchairs. Stairs, narrow doorways and curbs are handicaps imposed upon people with disabilities who use wheelchairs.
People with disabilities have all types of disabling conditions, including:
- mobility impairments
- blindness and vision impairments
- deafness and hearing impairments
- speech and language impairments
- mental and learning disabilities
The Americans with Disabilities Act (ADA)
The ADA was signed into law on July 26, 1990. The purpose of the Act is to:
- Provide clear and comprehensive national mandate to end discrimination against individuals with disabilities.
- Provide enforceable standards addressing discrimination against individuals with disabilities.
- Ensure that the federal government plays a central role in enforcing these standards on behalf of individuals with disabilities.
The ADA gives people with disabilities civil rights protection that is like that provided to individuals on the basis of race, sex, national origin and religion. People with disabilities now have a legal alternative for correcting accessibility barriers.
The ADA guarantees equal opportunity for individuals with disabilities in:
- public accommodations
- state and local government services
Reasonable Accommodations in the Work Place
Reasonable accommodations enhance the opportunity for qualified persons with disabilities, who might otherwise be passed over for reasons unrelated to actual job requirements, to become or remain employed. The purpose of providing reasonable accommodations is to enable employers to hire or retain qualified job candidates regardless of their disability by eliminating barriers in the work place.
According to the Department of Justice government-wide regulations, section 41.53, "A recipient shall make reasonable accommodation to the known physical or mental limitations of an otherwise qualified handicapped applicant or employee unless the recipient can demonstrate that the accommodation would impose an undue hardship on the operation of its program."
Employers are not required to go to outrageous expense or trouble to make accommodations for a disabled employee. The key is whether the requested accommodation is objectively reasonable or not. Reasonable accommodations apply to employees and must be made unless they would impose a significant difficulty or expense.
No modifications need be undertaken to fulfill the requirement of Title I until a qualified individual with a disability is being hired. Readily achievable, on the other hand, has to do with clients or guests. These modifications must be made before the disabled guest or client ever arrives. They include things such as:
- making curb cuts in sidewalks
- widening doorways and changing door hardware
- installing offset hinges to widen doorways
- installing ramps
- adding raised and Braille symbols at elevators
- rearranging furniture, vending machines, and displays
- removing high pile, low density carpeting
- repositioning shelves and telephones
- installing grab bars in toilet stalls
- installing raised toilet seats
- installing insulation on lavatory pipes under sinks
- installing flashing alarm lights
Inquiries made of an individual about limitations in job performance must be directly related to the prospective or existing position. Accommodations are tailored for a certain job or situation that an individual is hired to perform.
The law requires that each person with a disability must be consulted prior to the planning and be involved in the implementation of an accommodation. Types of accommodations include:
- assistive devices
- modified work schedules
- job modifications
- or a change in the physical place
Examples of assistive devices often used in the work place include:
- Teletypewriter (TTY) or telephone amplifier, often used by persons with hearing impairments
- Wooden blocks to elevate desks and tables for wheelchair users
- Large-type computer terminals and Braille printers to assist persons with vision impairments
Decisions to implement an accommodation should include making a choice that will best meet the needs of the individual by minimizing limitation and enhancing his or her ability to perform job tasks, while serving the interests of your majority work force.
Know where accessible restrooms, drinking fountains and telephones are located. If such facilities are not available, be ready to offer alternatives, such as the private or employee restroom, a glass of water or your desk phone.
Use a normal tone of voice when extending a verbal welcome. Do not raise your voice unless requested.
When introduced to a person with a disability, it is appropriate to offer to shake hands. People with limited hand use or who wear an artificial limb can usually shake hands. Shaking hands with the left hand is acceptable. For those who cannot shake hands, touch the person on the shoulder or arm to welcome and acknowledge their presence.
Treat adults in a manner befitting adults. Call a person by his or her first name only when extending that familiarity to all others present.
Never patronize people using wheelchairs by patting them on the head or shoulder. When addressing a person who uses a wheelchair, never lean on the person's wheelchair. The chair is part of the space that belongs to the person who uses it.
When talking with a person with a disability, look at and speak directly to that person rather than through a companion who may be along. If an interpreter is present, speak to the person who has scheduled the appointment, not to the interpreter. Always maintain eye contact with the applicant, not the interpreter.
Offer assistance in a dignified manner with sensitivity and respect. Be prepared to have the offer declined. Do not proceed to assist if your offer to assist is declined. If the offer is accepted, listen to or accept instructions.
Allow a person with a visual impairment to take your arm (at or about the elbow). This will enable you to guide rather than propel or lead the person. Offer to hold or carry packages in a welcoming manner. Example: "May I help you with your packages?" Offer to hang a coat or umbrella, but do not offer to take a cane or crutches unless the individual requests otherwise.
When talking to a person with a disability, look at and speak directly to that person, rather than through a companion who may be along. Don't be embarrassed if you happen to use accepted common expressions such as "See you later" or "Got to be running along" that seem to relate to the person's disability.
To get the attention of a person with a hearing impairment, tap the person on the shoulder or wave your hand. Look directly at the person and speak clearly, naturally and slowly to establish if the person can read lips. Not all persons with hearing impairments can lip-read. Those who can will rely on facial expression and other body language to help in understanding. Show consideration by placing yourself facing the light source and keeping your hands, cigarettes and food away from your mouth when speaking. Keep mustaches well-trimmed. Shouting won't help. Written notes may.
When talking with a person in a wheelchair for more than a few minutes, use a chair, whenever possible, in order to place yourself at the person's eye level to facilitate conversation. When greeting a person with a severe loss of vision, always identify yourself and others who may be with you. Example: "On my right is Mary Smith."
When conversing in a group, give a vocal cue by announcing the name of the person to whom you are speaking. Speak in a normal tone of voice, indicate in advance when you will be moving from one place to another, and let it be known when the conversation is at an end.
Listen attentively when you're talking to a person who has a speech impairment. Keep your manner encouraging rather than correcting. Exercise patience rather than attempting to speak for a person with speech difficulty. When necessary, ask short questions that require short answers or a nod or a shake of the head. Never pretend to understand if you are having difficulty doing so. Repeat what you understand, or incorporate the interviewee's statements into each of the following questions. The person's reactions will clue you in and guide you to understanding.
If you have difficulty communicating, be willing to repeat or rephrase a question. Open-ended questions are more appropriate than closed-ended questions. For example, when possible begin your question with "how" rather than "what." Examples:
Closed-Ended Question: "You were an administrative assistant in ARTS Company in the community planning division for seven years. What did you do there?"
Open-ended Question: "Tell me about your recent position as an administrative assistant."
Do not shout at a hearing impaired person. Shouting distorts sounds accepted through hearing aids and inhibits lip reading. Do not shout at a person who is blind or visually impaired -- he or she can hear you!
To facilitate conversation, be prepared to offer a visual cue to a hearing impaired person or an audible cue to a vision impaired person, especially when more than one person is speaking.
Interviewing & Scheduling Etiquette
Some interviewees with visual or mobility impairments will phone in prior to the appointment date, specifically for travel information. The scheduler should be very familiar with the travel path in order to provide interviewees with detailed information.
Make sure the place where you plan to conduct the interview is accessible by checking the following:
- Are there handicap parking spaces available and nearby?
- Is there a ramp or step-free entrance?
- Are there accessible rest-rooms?
- If the interview is not on the first floor, does the building have an elevator?
- Are there any water fountains and telephones at the proper height for a person in a wheelchair to use?
When scheduling interviews for persons with disabilities, consider their needs ahead of time:
- When giving directions to a person in a wheelchair, consider distance, weather conditions and physical obstacles such as stairs, curbs and steep hills.
- Use specifics such as "left a hundred feet" or "right two yards" when directing a person with a visual impairment.
Be considerate of the additional travel time that may be required by a person with a disability.
Familiarize the interviewee in advance with the names of all persons he or she will be meeting during the visit. This courtesy allows persons with disabilities to be aware of the names and faces that will be met.
People with disabilities use a variety of transportation services when traveling to and from work. When scheduling an interview, be aware that the person may be required to make a reservation 24 hours in advance, plus travel time. Provide the interviewee with an estimated time to schedule the return trip when arranging the interview appointment.
Expect the same measure of punctuality and performance from people with disabilities that is required of every potential or actual employee.
Interviewing Technique Etiquette
Conduct interviews in a manner that emphasizes abilities, achievements and individual qualities. Conduct your interview as you would with anyone. Be considerate without being patronizing. When interviewing a person with a speech impediment, stifle any urge to complete a sentence of an interviewee.
If it appears that a person's disability inhibits performance of a job, ask: "How would you perform this job?" Examples:
Inappropriate: "I notice that you are in a wheelchair, and I wonder how you get around. Tell me about your disability."
Appropriate: "This position requires digging and using a wheelbarrow, as you can see from the job description. Do you foresee any difficulty in performing the required tasks? If so, do you have any suggestions how these tasks can be performed?"
Interviewing Courtesies for Effective Communication
Interviewers need to know whether or not the job site is accessible and should be prepared to answer accessibility-related questions.
Interviewing a person using Mobility Aids: Enable people who use crutches, canes or wheelchairs to keep them within reach. Be aware that some wheelchair users may choose to transfer themselves out of their wheelchairs (into an office chair, for example) for the duration of the interview. Here again, when speaking to a person in a wheelchair or on crutches for more than a few minutes, sit in a chair. Place yourself at that person's eye level to facilitate conversation.
Interviewing a person with Vision Impairments: When greeting a person with a vision impairment always identify yourself and introduce anyone else who might be present. If the person does not extend their hand to shake hands, verbally extend a welcome.
Example: "Welcome to the Arts Council's offices." When offering seating, place the person's hand on the back or arm of the seat. A verbal cue is helpful as well. Let the person know if you move or need to end the conversation. Allow people who use crutches, canes or wheelchairs to keep them within reach.
Interviewing a person with Speech Impairments: Give your whole attention with interest when talking to a person who has a speech impairment. Ask short questions that require short answers or a nod of the head. Do not pretend to understand if you do not. Try rephrasing what you wish to communicate, or ask the person to repeat what you do not understand. Do not raise your voice. Most speech impaired persons can hear and understand.
Interviewing a person who is Deaf or Hearing Impaired: If you need to attract the attention of a person who is deaf or hearing impaired, touch him or her lightly on the shoulder. If the interviewee lip-reads, look directly at him or her. Speak clearly at a normal pace. Do not exaggerate your lip movements or shout. Speak expressively because the person will rely on your facial expressions, gestures and eye contact. Note: It is estimated that only 4 out of 10 spoken words are visible on the lips. Place yourself facing the light source and keep your hands, cigarettes and food away from your mouth when speaking. Shouting does not help and can be detrimental. Only raise your voice when requested. Brief, concise written notes may be helpful.
In the US most deaf people use American Sign Language (ASL). ASL is not a universal language. It is a language with its own syntax and grammatical structure. When scheduling an interpreter for a non-English speaking person, be sure to retain an interpreter that speaks and interprets in the language of the person. If an interpreter is present, it is common for the interpreter to be seated beside the interviewer, across from the interviewee. Interpreters facilitate communication. They should not be consulted or regarded as a reference for the interview.
Do's and Don'ts
- Do learn where to find and recruit people with disabilities.
- Do learn how to communicate with people who have disabilities.
- Do ensure that your applications and other company forms do not ask disability-related questions and that they are in formats that are accessible to all persons with disabilities.
- Do consider having written job descriptions that identify the essential functions of each job.
- Do ensure that requirements for medical examinations comply with the ADA.
- Do relax and make the applicant feel comfortable.
- Do provide reasonable accommodations that the qualified applicant will need to compete for the job.
- Do treat an individual with a disability the same way you would treat any applicant or employee—with dignity and respect.
- Do know that among those protected by the ADA are qualified individuals who have AIDS, cancer, who are mentally retarded, traumatically brain-injured, deaf, blind and learning disabled.
- Do understand that access includes not only environmental access but also making forms accessible to people with visual or cognitive disabilities and making alarms and signals accessible to people with hearing disabilities.
- Do develop procedures for maintaining and protecting confidential medical records.
- Do train supervisors on making reasonable accommodations.
- Don't assume that persons with disabilities do not want to work.
- Don't assume that alcoholism and drug abuse are not real disabilities, or that recovering drug abusers are not covered by the ADA.
- Don't ask if a person has a disability during an employment interview.
- Don't assume that certain jobs are more suited to persons with disabilities.
- Don't hire a person with a disability if that person is at significant risk of substantial harm to the health and safety of the public and there is no reasonable accommodation to reduce the risk or harm.
- Don't hire a person with a disability who is not qualified to perform the essential functions of the job even with a reasonable accommodation.
- Don't assume that you have to retain an unqualified employee with a disability.
- Don't assume that your current management will need special training to learn how to work with people with disabilities.
- Don't assume that the cost of accident insurance will increase as a result of hiring a person with a disability.
- Don't assume that the work environment will be unsafe if an employee has a disability.
- Don't assume that reasonable accommodations are expensive.
- Don't speculate or try to imagine how you would perform a specific job if you had the applicant's disability.
- Don't assume that you don't have any jobs that a person with a disability can do.
- Don't assume that your work place is accessible.
- Don't make medical judgments.
- Don't assume that a person with a disability can't do a job due to apparent or non-apparent disabilities.
- Don't assume that all people who are disabled are alike; they have varied interests, backgrounds, abilities, learning styles, and needs for accommodation.
The term "disability" has traditionally held different meanings for different people. The definition found in the ADA is not simply a medical definition. It refers not only to physical and mental impairments, but attitudes towards disabilities, in an effort to provide civil rights protection for anyone discriminated against for any reason related to disability.
Disability is a general term used for functional limitation that interferes with a person's ability, for example, to walk, hear or lift. It may refer to a physical, mental or sensory condition.
Use "person with a disability", never cripple or cripples—the image conveyed is of a twisted, deformed, useless body.
Avoid using handicap, handicapped person or handicapped. Folklore (and some history) suggests that "handicap" comes from the English phrase "cap in hand" and was used in reference to beggars with disabilities who were officially licensed to beg because of their disabilities. These people were given special caps in which to collect money.
Instead of the word "handicap," you should use the word "disability" and use it only as an adjective, never as a noun, such as in the term "the blind" or "the handicapped." It is best to place the adjective after the noun, such as "person with a disability." This emphasizes the individual as being important, not the disability.
For example, say people with cerebral palsy, people with spinal cord injuries. Never identify people solely by their disability. Person who had a spinal cord injury, polio, a stroke, etc., or a person who has multiple sclerosis, muscular dystrophy, arthritis, etc., is proper.
Victim: People with disabilities do not like to be perceived as victims for the rest of their lives, long after any victimization has occurred. Say person who has a disability, has a condition of (spina bifida, etc.), or born without legs, etc.
Never say defective, defect, deformed, or vegetable. These words are offensive, dehumanizing, degrading and stigmatizing.
Deafness vs. Hearing Impairment: Deafness refers to a person who has a total loss of hearing. Hearing impairment refers to a person who has a partial loss of hearing, within a range from slight to severe.
Hard of hearing describes a hearing-impaired person who communicates through speaking and speechreading, and who usually has listening and hearing abilities adequate for ordinary telephone communication. Many hard of hearing individuals use a hearing aid.
Deaf and dumb is as bad as it sounds. The inability to hear or speak does not indicate intelligence. Say person who has a mental or developmental disability. The words retarded, moron, imbecile, idiot are offensive to people who bear the label.
Confined/restricted to a wheelchair or wheelchair bound should be avoided. Most people who use a wheelchair or mobility devices do not regard them as confining. They are viewed as liberating; a means of getting around. Say uses a wheelchair or crutches; a wheelchair user; or walks with crutches.
Instead of able-bodied; able to walk, see, hear, etc.; say "people who are not disabled." Healthy, when used to contrast with "disabled," implies that the person with a disability is unhealthy. Many people with disabilities have excellent health. If needed, it's better to say people who do not have a disability.
Normal: When used as the opposite of disabled, implies that the person with a disability is abnormal. No one wants to be labeled as abnormal. The truth is that most people who are disabled are even more like than unlike those who are not disabled.
Do not use "afflicted with, suffers from." Most people with disabilities do not regard themselves as afflicted or suffering continually. It is acceptable to say, a person who has (name of disability).
Afflicted is not a good term to use either. A disability is not an affliction, although an affliction may have caused the disability.
Americans With Disabilities Act Barrier Removal Tax Credit and Deductions
The federal government conducted a survey in the late 1970s to determine the costs of accessibility in federal facilities impacted by the laws at that time. They found that the maximum cost of accessibility in new construction was less than 1% when accessibility was considered at the beginning of the project.
Accessibility can become expensive if it is ignored until late in the process when changes and compromises need to be made. Then the cost of trying to build accessibility into an existing building can be significant. To encourage building owners to make modifications under these circumstances, Congress authorized tax incentives for barrier removal.
The Federal Government has changed the tax code to help businesses improve accessibility. Congress legislated an annual tax credit of up to $5,000 for the purpose of enabling eligible small businesses to comply with applicable requirements under the ADA of 1990 (Section 44 of the Internal Revenue Code).
Any qualified expenditures made after November 5, 1990, the date of enactment, are eligible for the Section 44 credit. Additionally, Section 190 of the Internal Revenue Code allows $15,000 to be deducted annually for qualified architectural and transportation barrier removal expense. This provision became effective with tax year 1991.
A small business may elect to take a general business credit of up to $5,000 annually for eligible access expenditures to comply with the requirements of the ADA. A small business is defined as one with gross receipts of $1 million or less, or with 30 or fewer full-time employees.
Expenditures must be geared toward ADA compliance and must be reasonable and necessary expenses. Included are amounts related to removing barriers, providing interpreters, readers or similar services and modifying or acquiring equipment and materials.
The amount that may be taken as a credit is 50% of the amount exceeding $250, but less than $10,250 per tax year. For instance, if $7,500 is spent to provide an interpreter, the credit would be $3,625 ($7,500 minus $250 divided by 2).
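The credit rule just described is easy to sanity-check with a short calculation. The sketch below is only an illustration of the rule as stated in this handbook (50% of eligible expenditures over $250, counted up to $10,250, for a maximum credit of $5,000); it is not tax advice, and the function name is invented for the example.

```python
def disabled_access_credit(eligible_expenditure):
    # 50% of the amount over $250, counting expenditures only up to $10,250,
    # which caps the credit at $5,000 per tax year (per the rule above).
    capped = min(eligible_expenditure, 10_250)
    over_floor = max(capped - 250, 0)
    return 0.5 * over_floor

print(disabled_access_credit(7_500))   # 3625.0, matching the example above
print(disabled_access_credit(20_000))  # 5000.0, the annual maximum
```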
A business may take this credit each year it makes an accessibility improvement, be it purchase of equipment, provision of communication assistance or removal of an architectural barrier. This tax credit, called the Disabled Access Tax Credit, should be claimed on IRS Form 8826.
Section 190 applies to all businesses and has a narrower base for deductions. Qualified expenditures for the removal of architectural and transportation barriers include expenses specifically attributable to the removal of existing barriers (such as steps or narrow doors) or inaccessible parking spaces, bathrooms and vehicles. They may be fully deducted, up to a maximum of $15,000 for each taxable year. Expenses from the construction or comprehensive renovation of a facility or vehicle or the normal replacement of depreciable property are not included.
For further information contact your local IRS Office or:
Internal Revenue Service
111 Constitution Avenue, NW
Washington, D.C. 20024
Office on the ADA
Civil Rights Division
US Dept. of Justice
PO Box 66118
Washington, DC 20035-6118
202-514-0301 (V), 202-514-0383 (TDD)
Action Plan For Access Compliance
- Become Knowledgeable
Using this handbook or another source, prepare a "good faith plan" for immediate barrier removal.
- Survey Existing Conditions
Assemble a survey team including people with disabilities to assist in identifying barriers and developing solutions. You will need site and floor plans for making notes and a tape measure.
- Summarize the Results
List all identified barriers and indicate the actual dimensions/conditions of each.
- Consider Possible Solutions
Brainstorm ideas for barrier removal and determine probable costs for options. Decide which solutions best eliminate barriers at a reasonable cost. Consider practical alternatives.
- Prioritize Barrier Removal
Priority 1: Accessible entrances into the facility and path of travel to reach those entrances
Priority 2: Access to goods and services
Priority 3: Access to restrooms
Priority 4: Any other measures necessary to provide access
- Remove All Barriers Identified as "Readily Achievable"
A "Checklist for Readily Achievable Barrier Removal" is available through the Disability Access Office for use in completing a survey of potential architectural and communication barriers. The Iowa Arts Council also has an Accessibility Planning Guide; call or write for a copy.
- Put a "Good Faith" Action Plan In Place
It is critical to demonstrate a "good faith" effort which includes documentation of everything you have done and how you plan to address future compliance requirements.
- Utilize A Process For Continuing Accessibility
Review your implementation plan each year to reevaluate whether more improvements have become readily achievable. Set a date for your review and assign people who will be responsible for completion.
Signage: If accessible facilities are identified as such, then the international symbol of accessibility should be used. Signage for room numbers and names shall consist of color-contrasting characters between 5/8 and 2 inches high, raised a minimum of 1/32 inch, and mounted alongside the door on the handle side, no more than 8 inches from the door jamb and at a height of 60 inches above the floor. Accessibility symbols can also designate levels of access for events and locations so that people can decide beforehand if they will be comfortable with the accommodations.
Telephones: If public telephones are provided, then at least one unit per floor shall be accessible. Mount at 54 inches maximum to controls for side approach or 48 inches maximum for frontal approach. Equip with volume control. Text telephones (TDDs) shall be permanently affixed within or adjacent to the enclosure, or a portable TDD should be available.
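For facility surveys, the mounting heights quoted above can be turned into a simple check. The snippet below only restates the numbers given here (54 inches maximum to controls for a side approach, 48 inches for a frontal approach); the function and its interface are invented for the example and are not part of any official checklist.

```python
def phone_controls_within_limit(height_inches, approach):
    # Maximum control heights quoted in the telephone guidance above.
    limits = {"side": 54, "frontal": 48}
    return height_inches <= limits[approach]

print(phone_controls_within_limit(50, "side"))     # True
print(phone_controls_within_limit(50, "frontal"))  # False, too high for a frontal approach
```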
Alarms: Any electronically controlled device used for emergency warning must be visible in addition to audible.
Hazardous Areas: Uniform warning textures shall be placed on floors and door handle surfaces to hazardous areas such as stairways. This can be done by adhering a rough material to the floor surface (36 inch minimum width) and door handle.
Frequently Asked Questions
Who is protected by the ADA?
The ADA covers people with both physical and emotional disabilities. A person is considered to be disabled or to have a disability if he or she: has a physical or mental impairment that substantially limits one or more of his or her major life activities; has a record of such an impairment; or is regarded as having such an impairment.
What must my employer do to accommodate my disability?
Under the ADA, no covered employer may discriminate in hiring, promoting or laying off any person with a disability. Employers must make reasonable accommodations for their disabled employees so as to allow them to perform their jobs efficiently and safely.
Is there a guide and/or reference on new construction and remodeling jobs requiring accessible design?
Many small jobs, such as electrical outlets, stair work, door changes, etc. will continue to be done without formal design. City maintenance crews will need to be aware of requirements in order to carry out these typical jobs. It should be remembered that these guidelines are the bare essentials of a barrier-elimination program. You should consult your local building inspections department on any questions or details that might arise from these types of jobs.
Must an employer modify existing facilities to make them accessible?
An employer may be required to modify facilities to enable an individual to perform essential job functions and to have equal opportunity to participate in other employment-related activities.
What kinds of signs should I use?
The international symbol of accessibility should be used for all your accessible facilities, e.g., restrooms, room numbers and names.
Is it expensive to make all new construction of public facilities accessible?
The cost of incorporating accessibility features in new construction is less than one percent of construction costs. This is a small price in relation to the economic benefits from full accessibility in the future, such as increased employment and consumer spending and decreased welfare dependency.
Civil Rights Commission
211 E Maple, 2nd Floor
Des Moines IA 50309
Disability Determination Services Bureau
510 E 12th St
Des Moines IA 50309
Governor's Developmental Disabilities Council
617 E 2nd St
Des Moines IA 50309
Governor's DD Council provides services to people with developmental disabilities and their families. Information and guidance is available in a supportive and caring environment to help them live fulfilling lives in their communities.
Iowa Department for the Blind
524 4th St
Des Moines, IA 50309
The Iowa Department for the Blind works in partnership with Iowans who are blind or visually impaired to reach their goals.
Iowa Dept of Human Rights Division of Persons with Disabilities
321 E 12th
Lucas Bldg - 1st Floor
Des Moines IA 50319
Division of Deaf Services
The Iowa Department of Human Rights was created to provide consolidated administration of and support for various advocacy activities and related services.
Iowa Department of Human Services Mental Health/Developmental Disabilities
1305 E Walnut, Hoover Bldg
The Iowa Department of Human Services provides financial, health, and human services that promote the greatest possible independence and personal responsibility for all clients.
Iowa Division of Vocational Rehabilitation Services
510 E 12th
Des Moines IA 50309
Through the Iowa Division of Vocational Rehabilitation, Iowans with disabilities are attempting to be more independent, more productive and more involved in their communities.
Relay Iowa is a telecommunications relay service that links deaf and hard of hearing people via the telephone. The center is in operation seven days a week, 24 hours a day. It provides relay service for telephone calls, personal or business, to or from deaf, hard of hearing, or speech-impaired telephone customers.
400 E 14th, Grimes Bldg
Des Moines, IA 50319
VSA Iowa has statewide programs providing arts opportunities for preschoolers through 90-year-olds. The mission of VSAI is to provide quality arts opportunities for people with special needs.
Other Disability/Accessibility Resources
AbleData offers information regarding assistive technology. From the US Dept. of Education, National Institute on Disability and Rehabilitation Research, they put assistive technology and disability related resources at your fingertips. Contact them at 800-227-0216 or www.abledata.com.
The Access Board (Architectural and Transportation Barriers Compliance Board), created in 1973, has served the nation as the only independent federal agency whose primary mission is accessibility for people with disabilities. Visit them at www.access-board.gov.
You can view the full text of the ADA from the U.S. Department of Justice, Americans with Disabilities Act Document Center. This award-winning site contains ADA Statute, regulations, ADAAG (Americans with Disabilities Act Accessibility Guidelines), federally reviewed tech sheets, and other assistance documents. Visit them at www.usdoj.gov/crt/ada.
The Adaptive Technology Resource Centre is sponsored by the University of Toronto, Canada. Their glossary of adaptive technology is especially useful. Check them out at www.utoronto.ca/~ic.
The Alliance for Technology Access is a network of community-based resource centers that provides information and support services to children and adults with disabilities and helps increase their use of standard, assistive, and information technologies. Visit them at www.ataccess.org
The Center for Universal Design is committed to the design of products and environments to be usable by all people, which is one of the most effective ways to ensure access. A national research, information, and technical assistance center, it evaluates, develops, and promotes accessible and universal design in buildings and related products. Visit them at www.ncsu.edu/design/cud.
The Disability Access Symbols are produced by the Graphic Artists Guild Foundation. These 12 symbols may be used to promote and publicize accessibility of places, programs and other activities for people with various disabilities. You can download the symbols on your computer or purchase them on disk. Contact them at 212-463-7730 or visit their site at www.gag.org/das.
Easy Access for Students and Institutions (EASI) provides information and guidance in the area of access-to-information technologies by individuals with disabilities. Check them out at 800-433-3243 or http://easi.ed.gov.
The Job Accommodation Network (JAN on the Web) is an international consulting service that provides information about job accommodations and the employability of people with disabilities. JAN in the US is a service of the President's Committee on Employment of People with Disabilities. You can reach them at 800-526-7234 (V) or http://janweb.icdi.wvu.edu.
The Virtual Assistive Technology Center offers resources and is a place to download computer software that provides access to technology for disabled persons.
Check them out at http://www.at-center.com. | http://www.iowaartscouncil.org/publications_&_resources/guides/accessibility_guides/disability_etiquette.shtml | 13 |
16 | The Terrific Twelve Economic Concepts
The Terrific Twelve Economic Concepts Students Should Know
The Significant Seven plus:
Economic Systems: People and societies develop economic systems to deal with the basic economic problems raised by scarcity and opportunity costs. In particular, economic systems answer these three basic economic questions: what to produce, how to produce, and for whom to produce. The three fundamental economic systems are market, command, and traditional. A simple illustration of a pure market economy is the Circular Flow Model.
Productivity: Productivity is the amount of output per unit of input used. The most common measure of productivity is labor productivity - output per hour worked.
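Since labor productivity is just a ratio, a one-line calculation is enough to illustrate the definition above; the numbers below are invented for the example.

```python
def labor_productivity(total_output, hours_worked):
    # Output per hour worked, the most common productivity measure.
    return total_output / hours_worked

# e.g. 1,200 units produced with 400 hours of labor -> 3.0 units per hour
print(labor_productivity(1200, 400))
```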
Price (Supply and Demand): The price of a good or service is always determined by the interaction of supply and demand. Supply - The amounts of goods and services that people are willing and able to supply at various prices. Demand - The amounts of goods or services that people are willing and able to buy at various prices.
Money: Money is anything that is generally acceptable in exchange for goods and services. Money does not necessarily need to have any intrinsic value to serve as a medium of exchange. It is someone's willingness to accept it in payment that gives money its value in the exchange process.
Profit: Profit is the income left over for the owners of a business after all the costs of production have been paid. Profit is an incentive for entrepreneurs and a reward for taking a risk to start a business. Businesses that make a profit have successfully satisfied the wants of consumers. | http://www.niu.edu/icee/students_assess3.shtml | 13
21 | In order to fully understand the mechanisms of human physiology it is important to have an understanding of the chemical composition of the body. This will come in handy when considering the various interactions between cells and structures. We will gloss over the basic chemistry; however, if there are specific questions with regards to chemistry and its effect on biological function feel free to ask on the forum.
An atom is the smallest unit of matter with unique chemical properties. Atoms are the chemical units of cell structure. They consist of a central nucleus with protons and neutrons and orbit(s) of electrons. A proton carries a +1 positive charge, while a neutron has no charge. Thus the nucleus has a net positive charge. Electrons carry a –1 negative charge and are consequently attracted to the positive nucleus. In general, the number of protons usually equals the number of electrons. Recall that atoms have unique (individual) chemical properties, and thus each type of atom is called a chemical element, or just element.
Atomic number refers to the number of protons in an atom, while atomic weight refers to the number of protons and neutrons in an atom, measured in daltons. It is possible for elements to exist in multiple forms, called isotopes; the only difference is the number of neutrons in the nucleus, while protons and electrons always stay the same as the original element.
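To make these definitions concrete, here is a small sketch (not from the original text) that counts atomic number and atomic weight for two isotopes of carbon; the proton and neutron counts used are standard values.

```python
def atomic_number(protons, neutrons):
    # Atomic number counts only protons.
    return protons

def atomic_weight(protons, neutrons):
    # Atomic weight in daltons: protons plus neutrons.
    return protons + neutrons

carbon_12 = (6, 6)   # 6 protons, 6 neutrons
carbon_14 = (6, 8)   # same element (same protons), two extra neutrons
print(atomic_number(*carbon_12), atomic_weight(*carbon_12))  # 6 12
print(atomic_number(*carbon_14), atomic_weight(*carbon_14))  # 6 14
```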
The human body depends upon four major elements for form and function: Hydrogen (H), Oxygen (O), Carbon (C), and Nitrogen (N).
Atoms form molecules when two or more are bonded together.
A1—bond—A2 = Molecule: A1A2
Covalent bonds are formed when electrons in the outer orbit are shared between two atoms. With this type of bond formed, molecules can rotate around their shared electrons and change shapes. Every atom forms a characteristic number of covalent bonds. The number of bonds depends on the number of electrons in the outer orbit.
Hydrogen (H) has atomic number 1, with 1 electron in its outer orbit. Hydrogen forms 1 bond (single bond) meaning: 1 electron is shared.
Oxygen (O) has atomic number 8, with 6 electrons in its outer orbit. Thus Oxygen forms 2 bonds (double bond) meaning: 2 electrons are shared.
Nitrogen (N) has atomic number 7, with 5 electrons in its outer orbit. Nitrogen forms 3 bonds (triple bond) meaning: 3 electrons are shared.
Carbon (C) has atomic number 6, with 4 electrons in its outer orbit. Carbon forms 4 bonds, meaning: 4 electrons are shared.
In general: # of electrons in outer orbit + Shared electrons = 8 (full octet)
Make note that any electron shared is in attempt to reach a stable state. In most atoms this is an octet, or eight electrons in the outer orbit. Note Hydrogen only has space for 2 electrons in its outer orbit, one present and one shared.
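The bond-counting rule above can be written out as a short sketch. This encodes only the rule of thumb stated in the text (shared electrons = 8 minus outer-shell electrons, with hydrogen's shell holding just 2); it is not a full model of bonding.

```python
def covalent_bonds(outer_electrons, element_symbol=""):
    # Hydrogen's outer shell holds only 2 electrons, so it forms 2 - 1 = 1 bond.
    if element_symbol == "H":
        return 2 - outer_electrons
    # Every other atom described here aims for a full octet of 8.
    return 8 - outer_electrons

print(covalent_bonds(1, "H"))  # 1 bond for hydrogen
print(covalent_bonds(6))       # 2 bonds for oxygen
print(covalent_bonds(5))       # 3 bonds for nitrogen
print(covalent_bonds(4))       # 4 bonds for carbon
```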
Ions are atoms with a net electric charge due to the gain or loss of one or more electrons. Ionic bonds are bonds formed between two oppositely charged ions. Cations are ions with a net positive charge, while anions are those with a net negative charge.
Ionic forms of elements are important to the body, as they are able to conduct electricity when dissolved in water. These ions are called electrolytes. Single atoms, or atoms that are covalently linked in molecules, can undergo ionization. See examples below.
NaCl ↔ Na+ + Cl-
R-COOH ↔ R-COO- + H+
R-NH2 + H+ ↔ R-NH3+
Where R is any molecule attached to the shown functional group.
An atom with an unpaired electron in its outermost orbital is known as a free radical. Free radicals are highly reactive and short-lived. Within an organism, they are responsible for cellular breakdown. Sun damage is a classic example of free radicals acting on skin cells.
Polar bonds are bonds in which the electrons are shared unequally. The unequal sharing gives the atom with the higher share a more negative charge and the one with the lower share of electrons has a slightly more positive charge.
Hydrogen bonds are weak bonds between the hydrogen atom (more positive, lesser share of the electron) in one polar bond and an oxygen or nitrogen atom (more negative, greater share of the electron) in another polar bond.
H--O--H - - - O---H
Molecule 1 Molecule 2
Hydrogen bond between hydrogen of one water molecule and the oxygen of another. These bonds are rather weak.
Water is the most common molecule in the human body (~98-99%). Both hydrogen atoms are attached to the single oxygen atom by polar bonds. The oxygen has a slightly negative charge and the hydrogen atoms each have a slightly positive charge. This allows for hydrogen bonds to form between the positive hydrogen atoms and the negative oxygen atoms of neighboring water molecules. The state of water is determined by the weak hydrogen bonds. The bonds remain intact in low temperatures and the water freezes. When the temperature rises the bonds weaken and water becomes a liquid. If the temperature is high enough the bonds will completely break and water becomes a gas.
Substances dissolved in a liquid are called solutes, while the liquid itself is called the solvent. The term solution refers to the final product when solutes dissolve in a solvent.
Since water is the most common molecule in the human body, it should be no surprise that water is the most abundant solvent. In the body, a majority of the chemical reactions involve molecules dissolved in water. Hydrophilic (water-loving) molecules are molecules that easily dissolve in water. Generally, hydrophilic molecules have polar groups (e.g., -OH) and/or ionized functional groups (e.g., -COO- or -NH3+) attached. In contrast, molecules that are not attracted to water are called hydrophobic (water-fearing) molecules. They are molecules with electrically neutral covalent bonds (e.g., molecules with carbon chains). When non-polar molecules are mixed with water, two phases (layers) are formed. A good example is mixing oil and water and then allowing the container to sit for a while. There will be two distinct layers visible.
Molecules with a polar/ionized region at one end and a non-polar region at the other end are called amphipathic, as the molecule has both hydrophilic and hydrophobic characteristics. If amphipathic molecules are mixed with water, the molecules form clusters with the polar (hydrophilic) regions at the surface, where they will come into contact with water, and the non-polar (hydrophobic) regions nestled in the center of the cluster away from contact with water. The arrangement will increase the overall solubility in water.
With regards to solutions, concentration is the amount of solute present in a unit volume of solution. Concentration values do not reflect the number of molecules present.
An acid is a molecule that releases protons (hydrogen ions) in solution. Conversely, a base is a molecule that can accept a proton. Acids and bases can be further divided into strengths. A strong acid is an acid that releases all of its hydrogen ions in solution. Hydrochloric acid (HCl) is an excellent example of a strong acid. Weak acids are those which do not completely ionize, or lose their hydrogen ions, in solution. The concentration of free hydrogen ions (protons) is referred to as the acidity of the solution. The unit is pH = -log [H+] where [H+] is the concentration of free hydrogen ions. pH is a very important concept in biological systems, and certainly holds great weight in the processes of human physiology. Pure water is called a neutral solution, and has a pH value of 7. Alkaline solutions are also known as basic solutions and thus have a lower concentration of hydrogen ions [H+]. The pH of alkaline solutions is greater than 7. Acidic solutions have a high concentration of hydrogen ions [H+]. The pH of acidic solutions is less than 7. Each number on the pH scale indicates a 10-fold change in hydrogen concentration [H+]. Litmus papers are test strips that determine pH based upon color changes in the paper, after the strip is dipped into a solution.
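The pH definition above translates directly into a couple of lines of code. This sketch simply applies pH = -log10[H+] and the acidic/neutral/alkaline cutoffs described in the text; the example concentrations are illustrative.

```python
import math

def pH(hydrogen_ion_concentration):
    # pH = -log10 of the free hydrogen ion concentration (mol/L).
    return -math.log10(hydrogen_ion_concentration)

def classify(ph_value):
    if ph_value < 7:
        return "acidic"
    if ph_value > 7:
        return "alkaline (basic)"
    return "neutral"

for conc in (1e-7, 1e-3, 1e-10):
    value = pH(conc)
    print(f"[H+] = {conc:.0e} M -> pH {value:.1f} ({classify(value)})")
```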
Organic molecules contain carbon backbones. Every carbon atom will form 4 covalent bonds with other atoms, specifically other carbon atoms as well as hydrogen, nitrogen, oxygen and sulfur atoms. By linking together of many smaller molecules, carbon is able to form very large polymers (macromolecules) many of which are important to human physiology.
These important carbon-based molecules are vital to life in that they provide cells with energy. Carbohydrates are composed of carbon, hydrogen and oxygen in a set proportion. Where n is any whole number, the formula is: Cn(H2O)n .
(Structural sketch: each carbon in the chain typically carries a hydrogen atom and a hydroxyl group, H-C-OH.)
Carbohydrates are easily soluble in water due to the polar hydroxyl (OH-) groups. Most are sweet tasting and are also known by the common name: sugar.
Monosaccharides are the simplest sugars. Glucose (C6H12O6) is the most abundant, and is called blood sugar because it is the major monosaccharide in blood. The common monosaccharides in the body contain 5 or 6 carbon atoms and are called pentoses and hexoses, respectively.
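As a quick check of the general formula, the sketch below verifies that glucose fits Cn(H2O)n and adds up its molar mass. The atomic masses are standard reference values supplied for the example, not figures taken from the text.

```python
# Assumed standard atomic masses in g/mol (not given in the text).
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}

def fits_carbohydrate_formula(c, h, o):
    # Cn(H2O)n means twice as many hydrogens as carbons, and equal carbons and oxygens.
    return h == 2 * c and o == c

def molar_mass(c, h, o):
    return c * ATOMIC_MASS["C"] + h * ATOMIC_MASS["H"] + o * ATOMIC_MASS["O"]

# Glucose, C6H12O6
print(fits_carbohydrate_formula(6, 12, 6))  # True
print(round(molar_mass(6, 12, 6), 2))       # about 180.16 g/mol
```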
Disaccharides are carbohydrates composed of two monosaccharides linked together. Sucrose is composed of glucose and fructose. Maltose is composed of two glucose units. Lactose, milk sugar, is composed of glucose and galactose.
An oxygen atom links together monosaccharides by the removal of a hydrogen atom from one end and a hydroxyl group from the other. The hydroxyl group and the hydrogen combine to form a water molecule. Therefore, hydrolysis of a disaccharide will break the link formed and disconnect the two monosaccharides.
Polysaccharides are formed when many monosaccharides link together into long chains. Glycogen in animal cells and starch in plant cells are both composed of thousands of glucose molecules linked together.
Fats to the layman. Lipids are predominantly composed of hydrogen and carbon atoms linked together by neutral covalent bonds. Lipids are non-polar and consequently are not very soluble in water. There are four main classes of lipids to be aware of in learning about human physiology.
Fatty acids are chains of carbon and hydrogen atoms with a carboxyl group at one end. Generally, they are made of an even number of carbon atoms because they are synthesized by linking together fragments composed of two carbon atoms. If all the carbon atoms are linked by single covalent bonds, the chain is called a saturated fatty acid. If the chain contains double bonds between carbon atoms, it is called an unsaturated fatty acid. Furthermore, if only one double bond is present in the chain, it is a monounsaturated fatty acid, while if there is more than one double bond present it is called a polyunsaturated fatty acid.
Triacylglycerols, or triglycerides, account for the majority of lipids in the body. They are formed by linking each of the 3 hydroxyl groups of glycerol with the carboxyl groups of three fatty acids, hence the “tri” in the name. When a triacylglycerol is hydrolyzed, the fatty acids are released from the glycerol and the products can be metabolized in order to provide energy for cell functions.
Triacylglycerols have a near relative called phospholipids. The only difference is that one of the hydroxyl groups of the glycerol is linked to a phosphate. The phosphate end of the molecule is polar, while the fatty acid chains are non-polar, so a phospholipid is amphipathic. Phospholipids are very important in building membranes within the body.
Finally, steroids are composed of 4 interconnected carbon atom rings. They may have a few polar hydroxyl groups attached to the rings. Steroids are largely non-polar and therefore are not soluble in water. Sex hormones, such as testosterone and estrogen, are examples of steroids, as well as cholesterol and cortisol.
In addition to the common four elements of carbon, hydrogen, oxygen and nitrogen, proteins also contain sulfur and other elements in small amounts. Proteins are very large molecules of linked subunits called amino acids. They form very long chains.
Amino acids are composed of an amino (NH2) group and a carboxyl (COOH) group, both linked to a central carbon atom. The central carbon also carries a side chain, R, where R is another functional group or carbon chain, known as the amino acid side chain.
The proteins in living organisms are composed of the same set of 20 amino acids. Each amino acid is distinguished by its side chain (R).
As amino acids are joined together by peptide bonds they form a polypeptide, or a sequence of amino acids linked by peptide bonds. A peptide bond occurs when the carboxyl group of one amino acid forms a polar covalent bond with the amino group of another amino acid. In the formation of this bond one water molecule is released. The newly formed molecule will then have a free amino group at one end and a free carboxyl group at the other, which allows for linking additional amino acids.
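A tiny bookkeeping sketch makes the peptide-bond arithmetic explicit: a chain of n amino acids contains n - 1 peptide bonds and releases n - 1 water molecules as it forms. The function below is illustrative only.

```python
def peptide_summary(amino_acids):
    # Each peptide bond joins two neighbors and releases one water molecule,
    # so a chain of n residues has n - 1 bonds and releases n - 1 waters.
    n = len(amino_acids)
    bonds = max(n - 1, 0)
    return {"residues": n, "peptide_bonds": bonds, "water_released": bonds}

print(peptide_summary(["glycine", "serine", "threonine", "alanine"]))
# {'residues': 4, 'peptide_bonds': 3, 'water_released': 3}
```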
Glycoproteins are made when monosaccharides are covalently bonded to the side chains of specific amino acids in the protein (polypeptide). The specific amino acids that are singled out in the formation of a glycoprotein are serine and threonine.
Two things determine the primary structure of a protein:
The number of amino acids in the chain
Where each specific amino acid occurs in the chain.
It is important to remember that a polypeptide chain is flexible as each amino acid can rotate around its peptide bonds. Therefore, polypeptide chains can be bent into a number of shapes or conformations. The three dimensional conformation of a protein plays an important role in its functioning in the body.
Conformation of proteins is determined by several factors:
Hydrogen bonding between neighboring parts of the chain and any water molecules
Any ionic bonds between polar and ionized parts along the chain
Weak bonds called van der Waals forces between neighboring non-polar regions of the chain
Covalent bonds linking side chains of two amino acids
An alpha helix conformation is formed when hydrogen bonds form between the hydrogen linked to the nitrogen in one peptide bond and the double bonded oxygen in another. The hydrogen bonds contort the chain into a coil. When hydrogen bonds form between peptide bonds in regions of the polypeptide chain that run parallel, a straight and extended region forms called a beta sheet conformation. The alpha helix and the beta sheet conformations are very common. When ionic bonds form between side chains, and thus interfere with any repetitive hydrogen bonding, irregular regions called loop conformations may occur.
It is worth knowing that multimeric proteins are proteins consisting of more than one polypeptide chain. The chains can be similar or different.
Nucleic acids store, transmit and express genetic information. Nucleic acids are composed of subunits called nucleotides. Nucleotides contain a phosphate group, a sugar and a ring of carbon and nitrogen atoms. The ring is also known as the base because it can accept hydrogen ions (protons). Nucleotides are linked together by bonds between the phosphate group of one nucleotide and the sugar of the next one. In this fashion, nucleotides form long chains. DNA (deoxyribonucleic acid) stores genetic information in the sequence of the nucleotide subunits. RNA (ribonucleic acid) uses the information stored in DNA to write the instructions for linking together specific sequences of amino acids in order to form polypeptides per original DNA instructions.
DNA nucleotides contain a five carbon sugar called deoxyribose. DNA has four different nucleotides that correspond to four different bases. The purine bases adenine (A) and guanine (G) are composed of two fused rings of carbon and nitrogen. The pyrimidine bases cytosine (C) and thymine (T) are made of only one ring of carbon and nitrogen. Guanine pairs with cytosine, while thymine pairs with adenine: one purine always paired with one pyrimidine.
A DNA molecule looks like a double helix. It consists of two chains of nucleotides coiled around each other held by hydrogen bonds between a purine base on one chain and a pyrimidine base on the other.
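The base-pairing rules just described can be expressed as a short lookup. The sketch below builds the complementary strand for a made-up sequence; it ignores strand directionality and everything else about real DNA chemistry.

```python
# Pairing as described above: A with T, G with C.
DNA_PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def dna_complement(strand):
    # Replace each base with its partner to get the opposite strand.
    return "".join(DNA_PAIR[base] for base in strand)

template = "ATGCCGTA"            # an arbitrary example sequence
print(dna_complement(template))  # TACGGCAT
```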
RNA is slightly different than DNA. Specifically, RNA is a single chain of nucleotides, contains the sugar ribose, and the pyrimidine base uracil is present instead of thymine. Uracil can therefore pair with the purine adenine. | http://www.biology-online.org/9/1_chemical_composition.htm | 13 |
26 | There are just under one-hundred elements that occur in nature, but there are millions and millions of different compounds. Your hair, the air you breathe, the ink in a pen, the components of paper, the metals and plastics making your computer, and the water in the fountains are just a very few examples of the chemicals that exist everywhere in everything you touch. For us to be able to talk about and control them, we must be able to organize them. And we organize them by what kinds of bonds hold them together. The bonds holding atoms together give any chemical its very basic general characteristics.
A chemical bond occurs when two atoms are held together by mutual attraction to the same electrons. This attraction is balanced by the repulsion of the nuclei for each other, and the repulsion of the electrons for each other. Since each element is a unique combination of protons and electron arrangements, atoms of each element have a slightly different attraction for electrons. This relative attraction for shared electrons has been tabulated as electronegativity, which is a 0 to 4 scale. The atom with the strongest attraction for shared electrons is fluorine, with an electronegativity of 3.98, and the lowest electronegativity is that of francium (0.7). The symbol for electronegativity is χ (the Greek letter chi).
The type of bond formed between two atoms depends on their electronegativity. Atoms with strong attractions for electrons are non-metals, and tend to form anions (negative ions). Atoms with weak attractions for electrons are metals, and tend to form cations (positive ions). Atoms with moderate attraction for shared electrons are known as transition metals. There is one family of atoms that have virtually no attraction for electrons, but also do not give up the ones they have, called the noble gases.
There are some atoms that do not have electronegativities listed. Most of these atoms are so rare that it has not been possible to gather experimental data. A few of them (the noble gases, see below) do not form any compounds, and so a relative attraction for shared electrons cannot be calculated.
Chemical bonds occur when electrons end up paired with each other, and the bonded atoms always have lower total energy than the separated atoms.
If two atoms have very different attractions for electrons, then one of them will "steal" the electrons from the other. These two atoms are then "stuck" together by their opposite charges, in what is known as an ionic bond. Atoms in ionic compounds do not need exactly opposite charges; for example, calcium chloride has the formula CaCl2 and consists of calcium ions with a 2+ charge and chloride ions with a 1- charge. There will be enough of each ion so the overall charge is zero.
Also, ionic substances always have their ions in specific ratios (like in calcium chloride above: 1 Ca : 2 Cl), but they do not exist as molecules. Instead, they exist as a crystal lattice, which is a regularly constructed arrangement of positive and negative ions. These lattices can be any size, from sub-microscopic to many feet across, but for a given compound, they all have the same chemical properties.
Because an ionic compound does not exist as molecules of a specific size, we cannot calculate a molecular weight. We may still need to know how much of a certain compound we have, so for ionic compounds we calculate a formula mass. This is calculated and used the same way as molecular weight, but it tells us the mass of a single "formula unit" of a substance. For table salt, even though there are no specific Na-Cl pairs, we still add up the mass of one sodium atom and one chlorine atom, because they make up the formula unit.
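The arithmetic is simple enough to sketch in a few lines of Python. This is only an illustration: the rounded atomic masses and the helper function below are assumptions chosen for the example, not something from the page itself.

```python
# Approximate atomic masses in atomic mass units (amu); rounded illustrative values.
ATOMIC_MASS = {"Na": 22.99, "Cl": 35.45, "Ca": 40.08, "H": 1.008, "O": 16.00}

def formula_mass(composition):
    """Sum the atomic masses for one formula unit given as {symbol: count}."""
    return sum(ATOMIC_MASS[symbol] * count for symbol, count in composition.items())

# Table salt, NaCl: one sodium plus one chlorine per formula unit.
print(formula_mass({"Na": 1, "Cl": 1}))   # about 58.44 amu
# Calcium chloride, CaCl2: one calcium plus two chlorines.
print(formula_mass({"Ca": 1, "Cl": 2}))   # about 110.98 amu
```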
Ionic compounds that dissolve in water and break into their individual ions are known as electrolytes because the resulting solution conducts electricity. Ionic compounds that do not dissolve in water are called non-electrolytes, and tend to involve larger ions. Almost all ionic compounds are solids at room temperature. [If you can think of one that isn't, e-mail me.]
Two atoms with similarly strong attractions for electrons can't "steal" them from each other, so they must "share" electrons. This is known as a covalent bond. Generally, atoms will form a covalent bond if they are both non-metals. Also, many metals (except those in the first two columns) will form covalent bonds with non-metals, because even though they have opposite tendencies to form ions, they are not so different that complete transfer of an electron will take place.
Although some covalent compounds dissolve in water, like sugar or vinegar, their solutions do not conduct electricity because they do not break into ions. It may seem that this describes a non-electrolyte ionic compound, but there is another difference: covalent compounds usually form molecules, which are the smallest unit of a compound with all the properties of that compound. Unlike lattices, molecules have definite composition. A molecule of water has the formula H2O, which means that there is a tiny unit with exactly two hydrogen atoms connected to exactly one oxygen atom. A different number of either atom would be a different compound with different properties.
Covalent compounds are often liquid or gas at room temperature, and the ones that are solid are often soft or waxy.
We have seen that a given pair of atoms can either both strongly attract electrons (covalent bond), or one can strongly attract electrons away from the other (ionic bond). There is a third possibility that occurs if neither atom involved has a strong attraction for other electrons. These atoms are metals, and the resulting situation is known as a metallic bond. In this case, many atoms will be sharing valence electrons, but so weakly that the electrons do not "belong" to any specific nucleus. The collection of atoms acts like a clump of chocolate chip cookie dough, with each chip being a nucleus, and the dough being the electrons. They are all held together, and they hold a consistent shape, unless you push on them. In that case, the whole system deforms, but the same nuclei and electrons are still there. That's a model for malleability.
We won't be worrying too much about metallic bonds, except to say that two or more metals together form metallic bonds.
How Can We Tell?
There are two ways to decide which type of bond is involved in a given compound. There is a "rule of thumb" method, and a "calculation" method. They're both pretty easy. "Rule of Thumb" means an easy pattern to remember.
|The "rule of thumb" method requires you to mentally divide the periodic table into four regions, shown below.|
- active metal + active metal makes a metallic bond
- active metal + transition metal makes a metallic bond
- active metal + non-metal makes an ionic bond
- transition metal + non-metal makes a covalent or ionic bond
- transition metal + transition metal makes a metallic bond
In other words, the closer together two elements sit toward the right of the table, the more likely they are to form covalent bonds.
The calculation method requires a periodic table with electronegativities listed (like the colorful ones you're all supposed to have!). To find out what type of bond two atoms will form, subtract their electronegativities (big minus small). If the difference is bigger than 1.67, it's ionic. If the difference is less, it's covalent or metallic (two metals).
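The calculation rule lends itself to a small sketch in code. This is a rough illustration only: the 1.67 cutoff comes from the text above, while the handful of electronegativity values (approximate Pauling-scale numbers) and the tiny set of metals are assumed sample inputs, not a complete table.

```python
# A few approximate Pauling electronegativities, for illustration only.
ELECTRONEGATIVITY = {"Li": 0.98, "Na": 0.93, "C": 2.55, "N": 3.04, "O": 3.44, "Cl": 3.16}
METALS = {"Li", "Na"}  # assumed classification for this sketch

def bond_type(a, b):
    """Classify the bond between elements a and b using the 1.67 cutoff from the text."""
    difference = abs(ELECTRONEGATIVITY[a] - ELECTRONEGATIVITY[b])
    if difference > 1.67:
        return "ionic"
    # Below the cutoff: two metals share electrons weakly (metallic);
    # otherwise the atoms share electrons strongly (covalent).
    if a in METALS and b in METALS:
        return "metallic"
    return "covalent"

print(bond_type("Li", "Cl"))  # ionic    (3.16 - 0.98 = 2.18 > 1.67)
print(bond_type("C", "Cl"))   # covalent (3.16 - 2.55 = 0.61)
```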
Here's another way of thinking of it:
- If two atoms have strong, similar attractions for electrons, they will form covalent bonds (two non-metals).
- If two atoms have weak, similar attractions for electrons, they will form metallic bonds (two metals).
- If two atoms have very different attractions for electrons, they will form ionic bonds (metal + non-metal).
- If any atom has NO attraction for electrons, it will form NO bonds (noble gases).
|element pair|type of bond using "rule of thumb" method|type of bond using calculation method|
|Li & Cl|________________________|________________________|
|C & Cl|________________________|________________________|
|Fe & S|________________________|________________________|
|Ba & I|________________________|________________________|
|N & O|________________________|________________________|
|Ga & Br|________________________|________________________|
|Fe & Ne|________________________|________________________|
|Ru & N|________________________|________________________|
|Ni & Cu|________________________|________________________|
|Be & S|________________________|________________________|
Did the two methods give different results?
Polarity Of Bonds
When calculating the electronegativity difference to determine whether a given bond is covalent or ionic, it may have occurred to you that sometimes the difference may be barely enough to be ionic or covalent. What if the difference is exactly 1.67? Well, it helps to remember that even in the most ionic bond, there is still a little bit of time the "lost" electron spends around the positive ion. As the difference in electronegativity approaches zero, the sharing of the electron becomes more and more even, and the bond becomes less and less ionic.
To clear up these possibilities a little more, we call any bond between two non-metals with the same electronegativity a pure covalent bond. If the atoms have similar (but different) electronegativities, they are said to form a polar covalent bond. In these cases, the electron spends a bit more time closer to the more electronegative atom. For example, in water, the H-O bond is polar, with the oxygen "hogging" the electron. When the hogging is extreme, we have an ionic bond.
So what if the difference is 1.68? Or 1.66? Well, call it what the calculation tells you (ionic for the first one, covalent for the second), or use the rule of thumb, and show how you reached your decision. It is unlikely anyone will argue with your logic.
[http://www.sparknotes.com/chemistry/bonding/intro/][MHS Chem page] | http://www.dbooth.net/mhs/chem/bonding.html | 13 |
64 | - A contraction is a shortened form of one or two words (one of which is usually a verb) Some contractions are: I'm (I am), can't (cannot), how's (how is), and Ma'am (Madam). — “Contractions: ”,
- A contraction stress test checks to see if your baby (fetus) will be okay with the reduced oxygen levels that normally occur during contractions when you are in labor. — “Contraction Stress Test”,
- Contraction definition, an act or instance of contracting. See more. — “Contraction | Define Contraction at ”,
- Muscle contraction is the response a muscle has to any kind of stimuli where the result is shortening in length and development of force. — “Muscle Contraction Information on Healthline”,
- contraction n. The act of contracting or the state of being contracted. A word, as won't from will not, or phrase, as o'clock from of the clock,. — “contraction: Definition from ”,
- A contraction is a written form in which a number of words are combined into a new word. In English, contractions are usually represented with an apostrophe to replace the omitted letters that join the words together. — “What is a Contraction?”,
- Premature ventricular contraction. You don't need to be Editor-In-Chief to add or edit content to WikiDoc. You can begin to add to or edit text on this WikiDoc page by clicking on the edit button at the top of this page. Next enter or edit the information that you would like to appear here. — “Premature ventricular contraction - wikidoc”,
- A muscle contraction (also known as a muscle twitch or simply twitch) occurs when a muscle fibre generates tension through the action of actin and myosin cross-bridge cycling. While under tension, the muscle may lengthen, shorten or remain the. — “Muscle contraction - Starting Strength Wiki”,
- Even though Dupuytren's contraction is not the correct term, it does convey the fundamental idea of this disease process which is the contraction or shortening of the Dupuytrens contraction refers to what happens to the deep tissue. — “Dupuytrens Contracture Treatment with Alternative Medicine”, dupuytrens-
- A muscle contraction (also known as a muscle twitch or simply twitch) occurs when a muscle cell (called a muscle fiber) lengthens or shortens. Locomotion in most higher animals is possible only through the repeated contraction of many muscles at. — “Muscle contractions - Psychology Wiki”,
- Definition of contraction in the Online Dictionary. Meaning of contraction. Pronunciation of contraction. Translations of contraction. contraction synonyms, contraction antonyms. Information about contraction in the free online English. — “contraction - definition of contraction by the Free Online”,
- In physics, thermal expansion is the tendency of matter to increase in volume or pressure when heated. A number of materials contract on heating within certain temperature ranges; we usually speak of negative thermal expansion, rather than thermal contraction, in such cases. — “”,
- Figure 1: A demonstration of the difference in force responses for between lengthening and non-lengthening active contractions (isometric vs. eccentric), and between active lengthening (eccentric) vs. non-active lengthening (passive stretch). Concentric Contractions—Muscle Actively Shortening. — “Muscle Physiology - Types of Contractions”, muscle.ucsd.edu
- Definition of contraction from Webster's New World College Dictionary. Meaning of contraction. Pronunciation of contraction. Definition of the word contraction. Origin of the word contraction. — “contraction - Definition of contraction at ”,
- The American 23rd Infantry Division is still unofficially named Americal, the name being a contraction of "America" and "New Caledonia" Our contraction of debt in this quarter has reduced our ability to attract investors. — “contraction - Wiktionary”,
- Definition of word from the Merriam-Webster Online Dictionary with audio pronunciations, thesaurus, Word of the Day, and word games. Definition of CONTRACTION. 1. a : the action or process of contracting : the state of being contracted b : the shortening and thickening of a functioning muscle or muscle. — “Contraction - Definition and More from the Free Merriam”, merriam-
- CONTRACTION is a Crossover Prog / Progressive Rock artist from Canada. This page includes CONTRACTION's : biography, official website, pictures, videos from YouTube, MP3 (free download, stream), related forum topics, news, tour dates and events. — “CONTRACTION music, discography, MP3, videos and reviews”,
- A contraction, by requirement, always uses fewer cells than the corresponding uncontracted form. Whole-word contractions need to be either memorized or looked up in a braille dictionary. — “BRAILLE CONTRACTIONS”,
- Isotonic contractions are those contractions in which muscles contract and shorten, causing movement of body part. Know more about isotonic contraction of muscles with this article. Isotonic Contraction. — “Isotonic Contraction”,
- For contraction in Ancient Greek, the coalescence of two vowels into one, see crasis. A contraction is the shortening of a word, syllable, or word group by omission of internal letters. In traditional grammar, contraction can denote the formation of a new word from one word or a group of. — “Contraction (grammar) - Wikipedia, the free encyclopedia”,
- contraction - definition of contraction - A period of general economic decline. Contractions are often part of a business cycle, coming after an expansionary phase and before a recession. — “contraction Definition”,
- Braxton Hicks contractions light, usually painless, irregular uterine contractions during pregnancy, gradually increasing in intensity and frequency and becoming more rhythmic during the third trimester. hourglass contraction contraction of an organ, as the stomach or uterus, at or near the middle. — “contraction - definition of contraction in the Medical”, medical-
related videos for contraction
- Massive contractions at 39 weeks Massive contraction in the morning and then work in the afternoon.
- Jack Lalanne - Contraction Thanks
- Using Hypnosis & Hypnobabies During a Birthing Surge (Contraction) - Our homeschool blog http - Our health blog - Our family blog I'm using hypnobabies hypnosis techniques to stay completely comfortable during birthing. Also, my baby is posterior and I'm using the belly lift technique which reduced the intensity of my surges tremendously.
- Static Contraction - Anthony robbins - For more info on static contraction training .
- Contraction This is what a contraction looks like
- Length Contraction and Space-Time Concept Animation of Lorenz Transformation.
- Irish economy in sharp contraction - 26 Mar 09 Ireland, the first eurozone country to fall into an official recession, has released more gloomy financial data. Official statistics revealed that its gross domestic product shrank by 7.5 per cent in the three months to December, compared with the fourth quarter of 2007. Jonah Hull reports from Dublin
- Bio 335 Muscle Contraction Video.wmv Muscle contraction process song
- Static Contraction - Anthony Robbins part 2 For more info on Static Contraction Training.
- Muscle contraction muscle contraction video
- Post Modern Dance Techniques : How to Do Contractions & Extensions in Modern Dance Learn about contractions and extensions in modern dance in this free modern dance video. Expert: Benjamin Asriel Contact: Bio: Benjamin Asriel received his BA Music from Brown University. After graduating, he attended NYU's Tisch School of the Arts. He now dances with NYC choreographers and also presents his own work. Filmmaker: Paul Muller
- Sarcomere Contraction - Process Of Muscle Contraction With Myosin & Actin Thanks to McGraw Hill, you can watch and learn all about the process of muscle contraction with myosin and actin
- How to Know You're in Labor : Signs of Going Into Labor: Braxton Hicks Contractions Non-dilating contractions aren't going to get closer together. Learn about Braxton Hicks contractions and how to know if you're going into labor in thisfree video on pregnancy and childbirth. Expert: Lauren Ryan Contact: Bio: Lauren Ryan has been CSBE (Certified Supported Birth Educated) through Jana Warner, a Doula who she studied under in West Los Angeles. Filmmaker: Nili Nathan
- Trace Mayer's "The Great Credit Contraction" Trace Mayer shows that during a credit expansion capital moves into risky assets because they have the highest return. He also shows that during a credit contraction capital seeks the safest assets for protection. We are in the greatest credit contraction in the world's history... the base of the inverted pyramid of liquidity is "power money," gold and silver. Get a free sample with those cool charts from my video: Trace's site:
- contractions #1 so this is early in the day during a contraction---ooo the pain
- Inside Look: A Severe Global Economic Contraction Interview with Nouriel Roubini of NYU Stern School of Business
- When contractions ATTACK! A spicy contraction creeps up on Maureen as she files her nails
- Contractions - 6 cm Dilated At this point in labor, I received Stadol intravenously for pain.
- Making contractions with the verb "be" An English teacher shows how to make a contraction with a subject pronoun and the verb "be" in the present tense.
- Muscle Contraction Prentice Hall Presentation Pro video for Miller & Levine's Biology text
- The Great Credit Contraction by Trace Mayer JD The global economy is built on a derivative illusion. As the great credit contraction grinds on, the importance of performing accurate mental calculations of value will become more and more important. Every major country, including the United States, uses a fiat currency illusion as its legal tender. Even more troubling is that the worlds reserve currency—the FRN$—is a currency illusion. This system is evaporating before our very eyes. This book describes the background leading to this evaporation, which I call the Great Credit Contraction, sorts through complicated economic nomenclature, determines the root causes of the credit contraction evaporation, and suggests ways to maintain wealth during this global economic crisis. This book opens by discussing the development of money in the market. Understanding the historical landscape will provide the reader a perspective of where we currently are and what is likely to happen to the market in the future. To date, the development and rise of fractional reserve banking has perpetuated the inflationary credit expansion. During this process, fiat currency has risen to dominance with the culmination of the United States Federal Reserve Note Dollar (FRN$) as the worlds reserve currency. In summary, this book is an autopsy of the current worldwide monetary and financial system beginning with a brief overview of financial history, the current great deflationary credit contraction, and projecting the future ...
- How a muscle contraction is signalled - Animation Impulse to activate action potential in skeletal muscle, My notes.... VIII.Muscle Fibers a.specific structures in muscle cells (muscle fibers)(skeletal) allow the cells to contract and relax i.myofibrils (fills nearly all cytoplasm) 1.cylindrical structures that make up bulk of cytoplasm 2.consist of a chain of small, contractile units (sarcomeres) a.give muscle fiber striped appearance posed of myosin-II and actin i.myosin-II = thick center ii.actin = thin filaments that overlap mysoin-II 1.attach plus end to Z disc a.intersection of 2 sarcomeres b.Contraction i.when sarcomeres shorten, muscle fibers contract 1.heads of myosin filaments start walking along respective actin filaments a.pulls actin and myosin past each other b.occurs very quickly (less than 1/10 of a second) i.each myosin filament has 300 heads pulling on the actin ii.muscle fibers relax when myosin heads release the actin filaments iii.steps 1.attached (myosin attached to actin)(post stroke) a.rigor state b.ATP binds causes release 2.released a.ATP hydrolysis to ADP + Pi allows ***ed state 3.***ed a.Pi release is coupled with binding of myosin head to actin 4.force-generating(POWER STROKE) a.actin moves b.then ADP is released c.ATP binds 5.detached a.caused by ATP binding 6.ATP is hydrolysed to ADP +Pi a.Cause ***ed state iv.Steps (alternate) 1.Myosin starts attached to actin (just after the last stroke)("rigor" state). 2.ATP binds to myosin head causing a release from actin. 3.Then ATP hydrolysis ...
- How To Form Contractions in English- a grammar lesson This video lesson discusses how to form contractions.
- !!!Skeletal Muscle Contraction!!! (skeletal muscle physiology) SCIENCE EXPERIMENT !!!Skeletal Muscle Contraction!!! (skeletal muscle physiology) SCIENCE EXPERIMENT Crazy Chris shows you how to put your muscles to the test with a nifty science trick! Objective: To understand how our muscles move by contraction only. Materials Needed: - Your Arm - Somthing To...
- Action potential action potenial
- Contractions...waiting for Madeline I was determined to labor naturally with my first born child and labor I did. This is no drugs, real life. Based on the date/time stamp of the video, I had been in labor for 10 hours at this point. It's amazing to me to see how I handled the contractions. I still felt great and preferred to be left alone, which my hospital and doctors/midwife respectfully honored. As the day turned to night and day again, I didn't fare so well, but we did welcome our perfect baby in her own perfect way. After 23+ hours of laboring without any drugs (I was 100% effaced, Madeline was only at 0 station and I was 8 cm dilated), I got the epidural. at 27 hours, Madeline came out via c-section. She was 8 lbs 15 oz, and 22 inches long. And perfect in every way.
- In Contraction Kathleen In The Middle of a Contraction
- Alicia Having a Contraction with Hypnobabies Our first child was born at home, posterior and asynclitic (she came sunnyside up and head cocked to the side---usually results in long labors and c-section in hospital). Thanks to Hypnobabies (and lots of practice before hand) the birth was easy and fast. Alicia is getting close to transition in this video but you can see how calm she feels and how focused she is...she's just riding the birthing wave (contraction) out. The whole birth was like this.
- Meteorological Radar Systems This animation shows the working method of a meteorological weather observation radar that is a remote sensing system.
- Feex Contraction at Côte d'Opale Some sessions of speedflying at cote d'opale with the Aerodyne Feex Wind on take off was arround 30mph ( 45/50kmh) More informations on opale-
- Eureka! Episode 19 - Expansion and Contraction Using balloons to illustrate the process, Eureka! shows how, when matter gets hot, its molecules go faster and the solid, liquid, or gas expands. Conversely, when matter gets cold, its molecules go slower, and the solid, liquid, or gas contracts
- Muscle Structure and Function An HD version is coming soon! Visit for more! And here's the last animation done during high school, also the first 'real' one for biology. It explains the details of animal muscles and is actually very useful for anyone studying the subject. So useful, that it I should sell it to some educational or biological company, as my biology teacher said. Anyway, this one was, again, done late at night, so you will surely find a few glitches if you look for them. And yes, I know the voicing is bad and needs to be redone, but 15 minutes before school, after a night of almost no sleep, that was the best I could do. For all you biology students out there, I hope you find this useful ;) Start time: 05/27/08 15:28 Finish time: 06/13/08 11:43 Work time: around 10 hrs.
- Contractions 101 Teaching my wife how to work through a contraction and save me a false alarm ride to the ER.
- The Great Credit Contraction eBook The global economy is built on a derivative illusion. As the great credit contraction grinds on, the importance of performing accurate mental calculations of value will become more and more important. Every major country, including the United States, uses a fiat currency illusion as its legal tender. Even more troubling is that the worlds reserve currency—the FRN$—is a currency illusion. This system is evaporating before our very eyes. This book describes the background leading to this evaporation, which I call the Great Credit Contraction, sorts through complicated economic nomenclature, determines the root causes of the credit contraction evaporation, and suggests ways to maintain wealth during this global economic crisis. This book opens by discussing the development of money in the market. Understanding the historical landscape will provide the reader a perspective of where we currently are and what is likely to happen to the market in the future. To date, the development and rise of fractional reserve banking has perpetuated the inflationary credit expansion. During this process, fiat currency has risen to dominance with the culmination of the United States Federal Reserve Note Dollar (FRN$) as the worlds reserve currency. In summary, this book is an autopsy of the current worldwide monetary and financial system beginning with a brief overview of financial history, the current great deflationary credit contraction, and projecting the future ...
- Expansion and Contraction - Graduate Level Version - Part 1 ~ Shinzen Young Shinzen talks about Zero and One, the relationship between Kenotic Christianity and the emptying out of shuniya, and translating for his Zen teacher Joshu Sasaki Roshi. Filmed in Santa Barbara in Jan. 2009.
- Contractions: American English Pronunciation TRANSCRIPT:
- No Weight "At Home" Contraction Workout! (0:27)- Routine Sets & Reps (1:29)- Biceps (2:53)- Triceps (4:22)- Forearms (5:19)- Squat (6:04)- Hamstrings (7:07)- Quadriceps (8:04)- Glutes (8:47)- Calves (9:28)- Back (10:49)- Chest (12:23)- Shoulders (14:30)- Abs FREE iPhone App! iPhone App! Bio-Engineered Supplements & Nutrition BSN Facebook: Check out my Meal Plan!: TRX Follow me on Twitter! Check out for more information and detailed exercises!
- 1036- Muscle Contraction 1036- Muscle Contraction Bio II Project Fastforward through the worthless first part.
- Breathing During Cotractions Kim Nelli Kim Nelli breathing through her contraction during her natura water birth. It sounded so nice but did not end with an orgasmic birth. Hopefully next time!
Blogs & Forum
blogs and forums about contraction
“Daily currency exchange news and analysis from the experts at World First. Up to date exchange rates and breaking news from the foreign exchange markets”
— Contraction | Foreign Exchange and Currency Blog - World First,
“Equestrian news and information. heel contraction (3/4) - Discussion Forums - Barefoot Forum - Horsetalk Forum”
— heel contraction - Horsetalk, horsetalk.co.nz
“From both sides of the fence - credit demanded and credit extended - we are witnessing a contraction. This is NOT a slowdown; this is not a case of slower”
— HS Dent | The Slow Contraction of Credit Continues,
“Out of Control Policy Blog. 2010 Contraction Worse than 2008. Anthony Randazzo. November 1, 2010, 9:16am "Contraction Watch" tells us that the pain is not about to end anytime soon: See here for the whole blog post,”
— Reason Foundation - Out of Control Policy Blog > 2010,
“StrategyStreet / Blog / Industry Contraction. Left menu. User Login Industry Contraction Exposes Potential Low Price Points. The legal industry is suffering”
— Industry Contraction / Blog / StrategyStreet - StrategyStreet,
“Myosin is a molecular motor that acts like an active ratchet. Chains of actin proteins form high tensile passive 'thin' filaments that transmit the force generated by myosin to the ends of the muscle. Myosin also forms 'thick' filaments”
— muscle contraction | BioTecNika,
“In a special to , Jeremy Plonk points out the need for contraction in racing, a need that is already being fulfilled to some degree with The writer here fails to understand that contraction has also taken place in attendance”
— Paulick Report " Blog Archive " CONTRACTION,
“Catfood Software Blog. Catfood Contraction Timer for Android. In honor of the imminent spawning of my own new process I've published a contraction timer for Android. After looking at the available options I found everything to be too basic, or too complicated”
— Catfood Software Blog | Catfood Contraction Timer for Android,
“The blogosphere's dictionary The word blog is a contraction of the words web log. As technology improves and time goes by, the word blog is coming to mean different things to”
— Blog - Blogossary, | http://wordsdomination.com/contraction.html | 13 |
17 | This lesson plan allows students to work as different parts of Congress to balance part of the budget. They work through the process of balancing the budget from resolutions to appropriations. Ultimately, they must compare their version of the budget to the President's proposal and decide whether he would sign or veto their bills!
Budgeting; appropriations; resolutions; federal budget; congressional appropriations
In this simulation-style lesson plan, students learn how to develop a personal budget. Students select careers, homes, cars, family size, and other lifestyle choices and then develop a workable budget considering those criteria. They develop an understanding of a realistic budget and the difference between wants and needs.
Budgeting; finance; budget; future; needs; wants; personal finance; money management
Students learn about how the federal government manages money. They develop an understanding of the fundamentals of federal budgeting, including revenue streams, budgetary choices, and their consequences. This lesson includes both a PowerPoint and paper option.
This lesson shows students how the government gets its budget. It leads students through each step of the legislative process that results in a federal budget. Through a PowerPoint presentation or paper reading option, students learn about the compromises and choices that go into creating a budget to fund our government!
Students explore the many roles filled by their county government and the role of county governments in a federalist system. After a close examination of the county, students create their own fictional county! Students are familiarized with fun facts about county government and analyze the transition of county development through the lens of westward expansion.
County; local government; federalism; budgeting; legislation;
Constitution Day; U.S. Constitution; Separation of Powers; Checks and Balances; Three Branches; Article I; Article II; Article III; Executive Branch; Legislative Branch; Judicial Branch; Congress; Structure of Government
Take a peek into the electoral process, from party primaries to the general election. Students will learn the distinctions between the popular vote and the Electoral College, and exercise their critical reasoning skills to analyze the differences between the presidential and congressional elections. Students will also contrast the various nomination processes and learn about the role of party conventions in American politics.
Primary Election; General Election; Caucus; Electoral College; Electors; Popular Vote; Party Conventions; Nomination; Nominee; Campaign; Acceptance Speech; Delegate; Absolute Majority
The Civil War and Reconstruction Era brought about the end of slavery and the expansion of civil rights to African Americans through the 13th, 14th, and 15th Amendments. Compare the Northern and Southern states, discover the concepts of due process and equal protection, and understand how the former Confederate states reacted to the Reconstruction Amendments.
Use primary documents and images to discover the ways state and local governments restricted the newly gained freedoms of African Americans after the Civil War. Compare, contrast, and analyze post-war legislation, court decisions (including Plessy v. Ferguson), and a political cartoon by Thomas Nast to understand life in Jim Crow states.
Jim Crow Laws; Black Codes; Segregation; Miscegenation; Public Accommodation; Ku Klux Klan; Plessy v. Ferguson; Separate But Equal; Poll Taxes; Poll Tests; White Primary; Grandfather Clause; Resistance
Discover the people, groups, and events behind the Civil Rights Movement. Learn about means of non-violent protest, opposition to the movement, and identify how it took all three branches of the federal government to effect change. Protest posters, fictional diary entries, and a map of the movement's major events develop a greater understanding of the struggle for civil rights.
Civil Rights Movement; 1950s; 1960s; Discrimination; Segregation; Activism; Protests; Boycotts; Non-violence; Marches; Integration; Brown v. Board of Education; Civil Rights Act of 1964; 24th Amendment; Voting Rights Act; Prejudice; Thurgood Marshall; Rosa Parks; Martin Luther King Jr.; Malcolm X; Lyndon B. Johnson; NAACP; SNCC; SCLC | http://www.icivics.org/teachers/lesson-plans?page=2 | 13 |
74 | Analysis of variance
Analysis of variance (ANOVA) is a collection of statistical models used to analyze the differences between group means and their associated procedures (such as "variation" among and between groups). In the ANOVA setting, the observed variance in a particular variable is partitioned into components attributable to different sources of variation. In its simplest form, ANOVA provides a statistical test of whether or not the means of several groups are equal, and therefore generalizes the t-test to more than two groups. Doing multiple two-sample t-tests would result in an increased chance of committing a type I error. For this reason, ANOVAs are useful in comparing (testing) three or more means (groups or variables) for statistical significance.
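The effect of doing several t-tests instead of one ANOVA can be illustrated with a small simulation. The sketch below is an illustration under arbitrary assumptions (the group size, number of simulated experiments, and the 0.05 level are all just choices for the example): it repeatedly draws three groups from the same population and records how often at least one of the three pairwise t-tests comes out "significant" purely by chance.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sim, n_per_group, alpha = 5000, 20, 0.05
false_positives = 0

for _ in range(n_sim):
    # Three groups drawn from the same normal population, so the null hypothesis is true.
    a, b, c = (rng.normal(0.0, 1.0, n_per_group) for _ in range(3))
    pvals = [stats.ttest_ind(x, y).pvalue for x, y in ((a, b), (a, c), (b, c))]
    if min(pvals) < alpha:           # at least one pairwise test is "significant"
        false_positives += 1

# The observed rate is typically noticeably above 0.05, showing the inflated
# family-wise type I error rate that a single ANOVA F-test avoids.
print(false_positives / n_sim)
```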
Background and terminology
ANOVA is a particular form of statistical hypothesis testing heavily used in the analysis of experimental data. A statistical hypothesis test is a method of making decisions using data. A test result (calculated from the null hypothesis and the sample) is called statistically significant if it is deemed unlikely to have occurred by chance, assuming the truth of the null hypothesis. A statistically significant result (when a probability (p-value) is less than a threshold (significance level)) justifies the rejection of the null hypothesis.
In the typical application of ANOVA, the null hypothesis is that all groups are simply random samples of the same population. This implies that all treatments have the same effect (perhaps none). Rejecting the null hypothesis implies that different treatments result in altered effects.
By construction, hypothesis testing limits the rate of Type I errors (false positives leading to false scientific claims) to a significance level. Experimenters also wish to limit Type II errors (false negatives resulting in missed scientific discoveries). The Type II error rate is a function of several things including sample size (positively correlated with experiment cost), significance level (when the standard of proof is high, the chances of overlooking a discovery are also high) and effect size (when the effect is obvious to the casual observer, Type II error rates are low).
The terminology of ANOVA is largely from the statistical design of experiments. The experimenter adjusts factors and measures responses in an attempt to determine an effect. Factors are assigned to experimental units by a combination of randomization and blocking to ensure the validity of the results. Blinding keeps the weighing impartial. Responses show a variability that is partially the result of the effect and is partially random error.
ANOVA is the synthesis of several ideas and it is used for multiple purposes. As a consequence, it is difficult to define concisely or precisely.
"Classical ANOVA for balanced data does three things at once:
- As exploratory data analysis, an ANOVA is an organization of an additive data decomposition, and its sums of squares indicate the variance of each component of the decomposition (or, equivalently, each set of terms of a linear model).
- Comparisons of mean squares, along with F-tests ... allow testing of a nested sequence of models.
- Closely related to the ANOVA is a linear model fit with coefficient estimates and standard errors."
In short, ANOVA is a statistical tool used in several ways to develop and confirm an explanation for the observed data.
- It is computationally elegant and relatively robust against violations to its assumptions.
- ANOVA provides industrial strength (multiple sample comparison) statistical analysis.
- It has been adapted to the analysis of a variety of experimental designs.
As a result: ANOVA "has long enjoyed the status of being the most used (some would say abused) statistical technique in psychological research." ANOVA "is probably the most useful technique in the field of statistical inference."
ANOVA is difficult to teach, particularly for complex experiments, with split-plot designs being notorious. In some cases the proper application of the method is best determined by problem pattern recognition followed by the consultation of a classic authoritative text.
(Condensed from the NIST Engineering Statistics handbook: Section 5.7. A Glossary of DOE Terminology.)
- Balanced design
- An experimental design where all cells (i.e. treatment combinations) have the same number of observations.
- Blocking
- A schedule for conducting treatment combinations in an experimental study such that any effects on the experimental results due to a known change in raw materials, operators, machines, etc., become concentrated in the levels of the blocking variable. The reason for blocking is to isolate a systematic effect and prevent it from obscuring the main effects. Blocking is achieved by restricting randomization.
- Design
- A set of experimental runs which allows the fit of a particular model and the estimate of effects.
- DOE
- Design of experiments. An approach to problem solving involving collection of data that will support valid, defensible, and supportable conclusions.
- Effect
- How changing the settings of a factor changes the response. The effect of a single factor is also called a main effect.
- Error
- Unexplained variation in a collection of observations. DOE's typically require understanding of both random error and lack of fit error.
- Experimental unit
- The entity to which a specific treatment combination is applied.
- Factors
- Process inputs an investigator manipulates to cause a change in the output.
- Lack-of-fit error
- Error that occurs when the analysis omits one or more important terms or factors from the process model. Including replication in a DOE allows separation of experimental error into its components: lack of fit and random (pure) error.
- Model
- Mathematical relationship which relates changes in a given response to changes in one or more factors.
- Random error
- Error that occurs due to natural variation in the process. Random error is typically assumed to be normally distributed with zero mean and a constant variance. Random error is also called experimental error.
- Randomization
- A schedule for allocating treatment material and for conducting treatment combinations in a DOE such that the conditions in one run neither depend on the conditions of the previous run nor predict the conditions in the subsequent runs.[nb 1]
- Replication
- Performing the same treatment combination more than once. Including replication allows an estimate of the random error independent of any lack of fit error.
- Responses
- The output(s) of a process. Sometimes called dependent variable(s).
- Treatment
- A treatment is a specific combination of factor levels whose effect is to be compared with other treatments.
Classes of models
There are three classes of models used in the analysis of variance, and these are outlined here.
The fixed-effects model of analysis of variance applies to situations in which the experimenter applies one or more treatments to the subjects of the experiment to see if the response variable values change. This allows the experimenter to estimate the ranges of response variable values that the treatment would generate in the population as a whole.
Random effects models are used when the treatments are not fixed. This occurs when the various factor levels are sampled from a larger population. Because the levels themselves are random variables, some assumptions and the method of contrasting the treatments (a multi-variable generalization of simple differences) differ from the fixed-effects model.
A mixed-effects model contains experimental factors of both fixed and random-effects types, with appropriately different interpretations and analysis for the two types.
Example: Teaching experiments could be performed by a university department to find a good introductory textbook, with each text considered a treatment. The fixed-effects model would compare a list of candidate texts. The random-effects model would determine whether important differences exist among a list of randomly selected texts. The mixed-effects model would compare the (fixed) incumbent texts to randomly selected alternatives.
Defining fixed and random effects has proven elusive, with competing definitions arguably leading toward a linguistic quagmire.
Assumptions of ANOVA
The analysis of variance has been studied from several approaches, the most common of which uses a linear model that relates the response to the treatments and blocks. Even when the statistical model is nonlinear, it can be approximated by a linear model for which an analysis of variance may be appropriate.
Textbook analysis using a normal distribution
- Independence of observations – this is an assumption of the model that simplifies the statistical analysis.
- Normality – the distributions of the residuals are normal.
- Equality (or "homogeneity") of variances, called homoscedasticity — the variance of data in groups should be the same.
The separate assumptions of the textbook model imply that the errors are independently, identically, and normally distributed for fixed effects models, that is, that the errors ($\varepsilon$'s) are independent and $\varepsilon \sim N(0, \sigma^2)$.
In a randomized controlled experiment, the treatments are randomly assigned to experimental units, following the experimental protocol. This randomization is objective and declared before the experiment is carried out. The objective random-assignment is used to test the significance of the null hypothesis, following the ideas of C. S. Peirce and Ronald A. Fisher. This design-based analysis was discussed and developed by Francis J. Anscombe at Rothamsted Experimental Station and by Oscar Kempthorne at Iowa State University. Kempthorne and his students make an assumption of unit treatment additivity, which is discussed in the books of Kempthorne and David R. Cox.
In its simplest form, the assumption of unit-treatment additivity[nb 2] states that the observed response $y_{i,j}$ from experimental unit $i$ when receiving treatment $j$ can be written as the sum of the unit's response $y_i$ and the treatment-effect $t_j$, that is, $y_{i,j} = y_i + t_j$.
The assumption of unit-treatment additivity implies that, for every treatment $j$, the $j$th treatment has exactly the same effect $t_j$ on every experimental unit.
The assumption of unit treatment additivity usually cannot be directly falsified, according to Cox and Kempthorne. However, many consequences of treatment-unit additivity can be falsified. For a randomized experiment, the assumption of unit-treatment additivity implies that the variance is constant for all treatments. Therefore, by contraposition, a necessary condition for unit-treatment additivity is that the variance is constant.
The use of unit treatment additivity and randomization is similar to the design-based inference that is standard in finite-population survey sampling.
Derived linear model
Kempthorne uses the randomization-distribution and the assumption of unit treatment additivity to produce a derived linear model, very similar to the textbook model discussed previously. The test statistics of this derived linear model are closely approximated by the test statistics of an appropriate normal linear model, according to approximation theorems and simulation studies. However, there are differences. For example, the randomization-based analysis results in a small but (strictly) negative correlation between the observations. In the randomization-based analysis, there is no assumption of a normal distribution and certainly no assumption of independence. On the contrary, the observations are dependent!
The randomization-based analysis has the disadvantage that its exposition involves tedious algebra and extensive time. Since the randomization-based analysis is complicated and is closely approximated by the approach using a normal linear model, most teachers emphasize the normal linear model approach. Few statisticians object to model-based analysis of balanced randomized experiments.
Statistical models for observational data
However, when applied to data from non-randomized experiments or observational studies, model-based analysis lacks the warrant of randomization. For observational data, the derivation of confidence intervals must use subjective models, as emphasized by Ronald A. Fisher and his followers. In practice, the estimates of treatment-effects from observational studies are often inconsistent. In practice, "statistical models" and observational data are useful for suggesting hypotheses that should be treated very cautiously by the public.
Summary of assumptions
The normal-model based ANOVA analysis assumes the independence, normality and homogeneity of the variances of the residuals. The randomization-based analysis assumes only the homogeneity of the variances of the residuals (as a consequence of unit-treatment additivity) and uses the randomization procedure of the experiment. Both these analyses require homoscedasticity, as an assumption for the normal-model analysis and as a consequence of randomization and additivity for the randomization-based analysis.
However, studies of processes that change variances rather than means (called dispersion effects) have been successfully conducted using ANOVA. There are no necessary assumptions for ANOVA in its full generality, but the F-test used for ANOVA hypothesis testing has assumptions and practical limitations which are of continuing interest.
Problems which do not satisfy the assumptions of ANOVA can often be transformed to satisfy the assumptions. The property of unit-treatment additivity is not invariant under a "change of scale", so statisticians often use transformations to achieve unit-treatment additivity. If the response variable is expected to follow a parametric family of probability distributions, then the statistician may specify (in the protocol for the experiment or observational study) that the responses be transformed to stabilize the variance. Also, a statistician may specify that logarithmic transforms be applied to the responses, which are believed to follow a multiplicative model. According to Cauchy's functional equation theorem, the logarithm is the only continuous transformation that transforms real multiplication to addition.
Characteristics of ANOVA
ANOVA is used in the analysis of comparative experiments, those in which only the difference in outcomes is of interest. The statistical significance of the experiment is determined by a ratio of two variances. This ratio is independent of several possible alterations to the experimental observations: Adding a constant to all observations does not alter significance. Multiplying all observations by a constant does not alter significance. So ANOVA statistical significance results are independent of constant bias and scaling errors as well as the units used in expressing observations. In the era of mechanical calculation it was common to subtract a constant from all observations (when equivalent to dropping leading digits) to simplify data entry. This is an example of data coding.
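This invariance is easy to check numerically. The sketch below uses made-up numbers and scipy's one-way ANOVA routine: the F statistic for the raw data and for the same data after subtracting a constant and changing units should agree up to floating-point error.

```python
import numpy as np
from scipy import stats

g1 = np.array([3.0, 5.0, 4.0, 6.0, 4.0])
g2 = np.array([7.0, 6.0, 8.0, 9.0, 7.0])
g3 = np.array([5.0, 6.0, 7.0, 5.0, 6.0])

f_raw, _ = stats.f_oneway(g1, g2, g3)
# Recode the observations: subtract a constant, then change the units.
f_coded, _ = stats.f_oneway(*(10.0 * (g - 4.0) for g in (g1, g2, g3)))

print(f_raw, f_coded)   # the two F statistics agree
```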
Logic of ANOVA
The calculations of ANOVA can be characterized as computing a number of means and variances, dividing two variances and comparing the ratio to a handbook value to determine statistical significance. Calculating a treatment effect is then trivial, "the effect of any treatment is estimated by taking the difference between the mean of the observations which receive the treatment and the general mean."
Partitioning of the sum of squares
ANOVA uses traditional standardized terminology. The definitional equation of sample variance is $s^2 = \frac{1}{n-1} \sum_i (y_i - \bar{y})^2$, where the divisor is called the degrees of freedom (DF), the summation is called the sum of squares (SS), the result is called the mean square (MS) and the squared terms are deviations from the sample mean. ANOVA estimates 3 sample variances: a total variance based on all the observation deviations from the grand mean, an error variance based on all the observation deviations from their appropriate treatment means and a treatment variance. The treatment variance is based on the deviations of treatment means from the grand mean, the result being multiplied by the number of observations in each treatment to account for the difference between the variance of observations and the variance of means. If the null hypothesis is true, all three variance estimates are equal (within sampling error).
The fundamental technique is a partitioning of the total sum of squares SS into components related to the effects used in the model. For example, the model for a simplified ANOVA with one type of treatment at different levels is $SS_{\text{Total}} = SS_{\text{Error}} + SS_{\text{Treatments}}$.
The number of degrees of freedom DF can be partitioned in a similar way: one of these components (that for error) specifies a chi-squared distribution which describes the associated sum of squares, while the same is true for "treatments" if there is no treatment effect: $DF_{\text{Total}} = DF_{\text{Error}} + DF_{\text{Treatments}}$.
See also Lack-of-fit sum of squares.
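As a concrete sketch of the partition (using made-up numbers, not data from any particular study), the following computes the three sums of squares directly from their definitions and checks that the treatment and error components add up to the total.

```python
import numpy as np

# Three treatment groups of six observations each (arbitrary illustrative data).
groups = [np.array([6.0, 8.0, 4.0, 5.0, 3.0, 4.0]),
          np.array([8.0, 12.0, 9.0, 11.0, 6.0, 8.0]),
          np.array([13.0, 9.0, 11.0, 8.0, 7.0, 12.0])]

all_obs = np.concatenate(groups)
grand_mean = all_obs.mean()

ss_total = ((all_obs - grand_mean) ** 2).sum()
ss_treatments = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups)

df_treatments = len(groups) - 1           # I - 1
df_error = len(all_obs) - len(groups)     # n_T - I

print(ss_total, ss_treatments + ss_error)                   # 152.0 and 84.0 + 68.0
print(ss_treatments / df_treatments, ss_error / df_error)   # MS_Treatments, MS_Error
```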
The F-test is used for comparing the factors of the total deviation. For example, in one-way, or single-factor ANOVA, statistical significance is tested for by comparing the F test statistic

$$F = \frac{\text{variance between treatments}}{\text{variance within treatments}} = \frac{MS_{\text{Treatments}}}{MS_{\text{Error}}} = \frac{SS_{\text{Treatments}} / (I - 1)}{SS_{\text{Error}} / (n_T - I)}$$

where $MS$ is mean square, $I$ = number of treatments and $n_T$ = total number of cases
to the F-distribution with $I - 1$, $n_T - I$ degrees of freedom. Using the F-distribution is a natural candidate because the test statistic is the ratio of two scaled sums of squares each of which follows a scaled chi-squared distribution.
The expected value of F is $1 + \frac{n \sigma^2_{\text{Treatment}}}{\sigma^2_{\text{Error}}}$ (where $n$ is the treatment sample size), which is 1 for no treatment effect. As values of F increase above 1 the evidence is increasingly inconsistent with the null hypothesis. Two apparent experimental methods of increasing F are increasing the sample size and reducing the error variance by tight experimental controls.
The textbook method of concluding the hypothesis test is to compare the observed value of F with the critical value of F determined from tables. The critical value of F is a function of the numerator degrees of freedom, the denominator degrees of freedom and the significance level (α). If F ≥ FCritical (Numerator DF, Denominator DF, α) then reject the null hypothesis.
The computer method calculates the probability (p-value) of a value of F greater than or equal to the observed value. The null hypothesis is rejected if this probability is less than or equal to the significance level (α). The two methods produce the same result.
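Both decision routes can be carried out with scipy's F distribution. The sketch below continues the made-up data from the partition example above (SS_Treatments = 84 on 2 degrees of freedom, SS_Error = 68 on 15), with a 0.05 significance level assumed for illustration.

```python
from scipy import stats

ms_treatments, ms_error = 84.0 / 2, 68.0 / 15
df_treatments, df_error = 2, 15

f_obs = ms_treatments / ms_error                          # about 9.3
f_crit = stats.f.ppf(1 - 0.05, df_treatments, df_error)   # table method, about 3.68
p_value = stats.f.sf(f_obs, df_treatments, df_error)      # computer method, well below 0.05

print(f_obs, f_crit, p_value)
print("reject H0" if f_obs >= f_crit else "fail to reject H0")
```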
The ANOVA F-test is known to be nearly optimal in the sense of minimizing false negative errors for a fixed rate of false positive errors (maximizing power for a fixed significance level). To test the hypothesis that all treatments have exactly the same effect, the F-test's p-values closely approximate the permutation test's p-values: The approximation is particularly close when the design is balanced. Such permutation tests characterize tests with maximum power against all alternative hypotheses, as observed by Rosenbaum.[nb 3] The ANOVA F–test (of the null-hypothesis that all treatments have exactly the same effect) is recommended as a practical test, because of its robustness against many alternative distributions.[nb 4]
ANOVA consists of separable parts; partitioning sources of variance and hypothesis testing can be used individually. ANOVA is used to support other statistical tools. Regression is first used to fit more complex models to data, then ANOVA is used to compare models with the objective of selecting simple(r) models that adequately describe the data. "Such models could be fit without any reference to ANOVA, but ANOVA tools could then be used to make some sense of the fitted models, and to test hypotheses about batches of coefficients." "[W]e think of the analysis of variance as a way of understanding and structuring multilevel models—not as an alternative to regression but as a tool for summarizing complex high-dimensional inferences ..."
ANOVA for a single factor
The simplest experiment suitable for ANOVA analysis is the completely randomized experiment with a single factor. More complex experiments with a single factor involve constraints on randomization and include completely randomized blocks and Latin squares (and variants: Graeco-Latin squares, etc.). The more complex experiments share many of the complexities of multiple factors. A relatively complete discussion of the analysis (models, data summaries, ANOVA table) of the completely randomized experiment is available.
ANOVA for multiple factors
ANOVA generalizes to the study of the effects of multiple factors. When the experiment includes observations at all combinations of levels of each factor, it is termed factorial. Factorial experiments are more efficient than a series of single factor experiments and the efficiency grows as the number of factors increases. Consequently, factorial designs are heavily used.
The use of ANOVA to study the effects of multiple factors has a complication. In a 3-way ANOVA with factors x, y and z, the ANOVA model includes terms for the main effects (x, y, z) and terms for interactions (xy, xz, yz, xyz). All terms require hypothesis tests. The proliferation of interaction terms increases the risk that some hypothesis test will produce a false positive by chance. Fortunately, experience says that high order interactions are rare. The ability to detect interactions is a major advantage of multiple factor ANOVA. Testing one factor at a time hides interactions, but produces apparently inconsistent experimental results.
Caution is advised when encountering interactions; test interaction terms first and expand the analysis beyond ANOVA if interactions are found. Texts vary in their recommendations regarding the continuation of the ANOVA procedure after encountering an interaction. Interactions complicate the interpretation of experimental data. Neither the calculations of significance nor the estimated treatment effects can be taken at face value. "A significant interaction will often mask the significance of main effects." Graphical methods are recommended to enhance understanding. Regression is often useful. A lengthy discussion of interactions is available in Cox (1958). Some interactions can be removed (by transformations) while others cannot.
A variety of techniques are used with multiple factor ANOVA to reduce expense. One technique used in factorial designs is to minimize replication (possibly no replication with support of analytical trickery) and to combine groups when effects are found to be statistically (or practically) insignificant. An experiment with many insignificant factors may collapse into one with a few factors supported by many replications.
Worked numeric examples
Some analysis is required in support of the design of the experiment while other analysis is performed after changes in the factors are formally found to produce statistically significant changes in the responses. Because experimentation is iterative, the results of one experiment alter plans for following experiments.
The number of experimental units
In the design of an experiment, the number of experimental units is planned to satisfy the goals of the experiment. Experimentation is often sequential.
Early experiments are often designed to provide mean-unbiased estimates of treatment effects and of experimental error. Later experiments are often designed to test a hypothesis that a treatment effect has an important magnitude; in this case, the number of experimental units is chosen so that the experiment is within budget and has adequate power, among other goals.
Reporting sample size analysis is generally required in psychology. "Provide information on sample size and the process that led to sample size decisions." The analysis, which is written in the experimental protocol before the experiment is conducted, is examined in grant applications and administrative review boards.
Besides the power analysis, there are less formal methods for selecting the number of experimental units. These include graphical methods based on limiting the probability of false negative errors, graphical methods based on an expected variation increase (above the residuals) and methods based on achieving a desired confidence interval.
Power analysis is often applied in the context of ANOVA in order to assess the probability of successfully rejecting the null hypothesis if we assume a certain ANOVA design, effect size in the population, sample size and significance level. Power analysis can assist in study design by determining what sample size would be required in order to have a reasonable chance of rejecting the null hypothesis when the alternative hypothesis is true.
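As a rough illustration, the following Python sketch estimates the power of a one-way ANOVA F-test from an assumed Cohen's ƒ effect size, using SciPy's noncentral F distribution; the common convention that the noncentrality parameter equals ƒ² times the total sample size is assumed, and the example numbers are hypothetical.

    from scipy import stats

    def anova_power(f_effect, k_groups, n_per_group, alpha=0.05):
        """Approximate power of the one-way ANOVA F-test for a given Cohen's f.

        Assumes the usual convention that the noncentrality parameter is
        lambda = f**2 * N, where N is the total sample size.
        """
        n_total = k_groups * n_per_group
        dfn, dfd = k_groups - 1, n_total - k_groups
        nc = f_effect ** 2 * n_total
        f_crit = stats.f.ppf(1 - alpha, dfn, dfd)
        # Power = P(F > critical value) when F follows a noncentral F distribution.
        return stats.ncf.sf(f_crit, dfn, dfd, nc)

    # e.g. a medium effect (f = 0.25) with 4 groups of 20 observations each
    power = anova_power(0.25, k_groups=4, n_per_group=20)

The same function can be inverted by trial over candidate sample sizes to find the smallest n_per_group that achieves a desired power.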
Several standardized measures of effect gauge the strength of the association between a predictor (or set of predictors) and the dependent variable. Effect-size estimates facilitate the comparison of findings in studies and across disciplines. A non-standardized measure of effect size with meaningful units may be preferred for reporting purposes.
η² (eta-squared): Eta-squared describes the ratio of variance explained in the dependent variable by a predictor while controlling for other predictors. Eta-squared is a biased estimator of the variance explained by the model in the population (it estimates only the effect size in the sample). On average it overestimates the variance explained in the population; as the sample size gets larger the amount of bias gets smaller.
Cohen (1992) suggests effect sizes for various indexes, including ƒ (where 0.1 is a small effect, 0.25 is a medium effect and 0.4 is a large effect). He also offers a conversion table (see Cohen, 1988, p. 283) for eta squared (η2) where 0.0099 constitutes a small effect, 0.0588 a medium effect and 0.1379 a large effect.
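For illustration only, a small Python sketch computes η² from the sums of squares of the hypothetical one-way example above and converts it to Cohen's ƒ via the standard relation ƒ² = η² / (1 − η²); the data are invented for the example.

    import numpy as np

    # Hypothetical groups (same as the earlier sketch).
    groups = [np.array([6.1, 5.8, 6.4, 6.0]),
              np.array([7.2, 6.9, 7.5, 7.1]),
              np.array([5.4, 5.9, 5.6, 5.3])]
    all_obs = np.concatenate(groups)
    grand_mean = all_obs.mean()

    ss_total = ((all_obs - grand_mean) ** 2).sum()
    ss_treat = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)

    eta_squared = ss_treat / ss_total                       # variance explained in the sample
    cohens_f = (eta_squared / (1 - eta_squared)) ** 0.5     # f^2 = eta^2 / (1 - eta^2)

The resulting η² and ƒ can then be read against the benchmarks quoted above (0.0099 / 0.0588 / 0.1379 for η²; 0.1 / 0.25 / 0.4 for ƒ).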
It is always appropriate to carefully consider outliers. They have a disproportionate impact on statistical conclusions and are often the result of errors.
It is prudent to verify that the assumptions of ANOVA have been met. Residuals are examined or analyzed to confirm homoscedasticity and gross normality. Residuals should have the appearance of (zero mean normal distribution) noise when plotted as a function of anything including time and modeled data values. Trends hint at interactions among factors or among observations. One rule of thumb: "If the largest standard deviation is less than twice the smallest standard deviation, we can use methods based on the assumption of equal standard deviations and our results will still be approximately correct."
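A minimal sketch of such checks, again using the hypothetical groups from the earlier example (an illustrative assumption, not a prescribed procedure):

    import numpy as np

    # Hypothetical groups (same as the earlier sketch).
    groups = [np.array([6.1, 5.8, 6.4, 6.0]),
              np.array([7.2, 6.9, 7.5, 7.1]),
              np.array([5.4, 5.9, 5.6, 5.3])]

    # Rule of thumb: largest group standard deviation less than twice the smallest.
    sds = np.array([g.std(ddof=1) for g in groups])
    equal_sd_ok = sds.max() < 2 * sds.min()

    # Residuals (observation minus its treatment mean) should look like zero-mean
    # noise when plotted against time, fitted values, or any other variable.
    residuals = np.concatenate([g - g.mean() for g in groups])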
A statistically significant effect in ANOVA is often followed up with one or more different follow-up tests. This can be done in order to assess which groups are different from which other groups or to test various other focused hypotheses. Follow-up tests are often distinguished in terms of whether they are planned (a priori) or post hoc. Planned tests are determined before looking at the data and post hoc tests are performed after looking at the data.
Often one of the "treatments" is none, so the treatment group can act as a control. Dunnett's test (a modification of the t-test) tests whether each of the other treatment groups has the same mean as the control.
Post hoc tests such as Tukey's range test most commonly compare every group mean with every other group mean and typically incorporate some method of controlling for Type I errors. Comparisons, which are most commonly planned, can be either simple or compound. Simple comparisons compare one group mean with one other group mean. Compound comparisons typically compare two sets of groups means where one set has two or more groups (e.g., compare average group means of group A, B and C with group D). Comparisons can also look at tests of trend, such as linear and quadratic relationships, when the independent variable involves ordered levels.
Following ANOVA with pair-wise multiple-comparison tests has been criticized on several grounds. There are many such tests (10 in one table) and recommendations regarding their use are vague or conflicting.
Study designs and ANOVAs
There are several types of ANOVA. Many statisticians base ANOVA on the design of the experiment, especially on the protocol that specifies the random assignment of treatments to subjects; the protocol's description of the assignment mechanism should include a specification of the structure of the treatments and of any blocking. It is also common to apply ANOVA to observational data using an appropriate statistical model.
Some popular designs use the following types of ANOVA:
- One-way ANOVA is used to test for differences among two or more independent groups (means), e.g. different levels of urea application in a crop. Typically, however, the one-way ANOVA is used to test for differences among at least three groups, since the two-group case can be covered by a t-test. When there are only two means to compare, the t-test and the ANOVA F-test are equivalent; the relation between ANOVA and t is given by F = t² (a numerical check of this identity is sketched after this list).
- Factorial ANOVA is used when the experimenter wants to study the interaction effects among the treatments.
- Repeated measures ANOVA is used when the same subjects are used for each treatment (e.g., in a longitudinal study).
- Multivariate analysis of variance (MANOVA) is used when there is more than one response variable.
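As referenced in the one-way ANOVA item above, the following Python sketch checks numerically that the one-way ANOVA F statistic equals the square of the pooled-variance t statistic when there are exactly two groups; the two-group data are randomly generated and purely hypothetical, and SciPy is an assumed tool.

    import numpy as np
    from scipy import stats

    # Hypothetical two-group example demonstrating F = t**2.
    rng = np.random.default_rng(0)
    a = rng.normal(10.0, 2.0, size=15)
    b = rng.normal(11.0, 2.0, size=15)

    t_stat, _ = stats.ttest_ind(a, b)        # pooled-variance two-sample t-test
    F_stat, _ = stats.f_oneway(a, b)         # one-way ANOVA with two groups

    assert np.isclose(F_stat, t_stat ** 2)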
Balanced experiments (those with an equal sample size for each treatment) are relatively easy to interpret; unbalanced experiments offer more complexity. For single factor (one way) ANOVA, the adjustment for unbalanced data is easy, but the unbalanced analysis lacks both robustness and power. For more complex designs the lack of balance leads to further complications. "The orthogonality property of main effects and interactions present in balanced data does not carry over to the unbalanced case. This means that the usual analysis of variance techniques do not apply. Consequently, the analysis of unbalanced factorials is much more difficult than that for balanced designs." In the general case, "The analysis of variance can also be applied to unbalanced data, but then the sums of squares, mean squares, and F-ratios will depend on the order in which the sources of variation are considered." The simplest techniques for handling unbalanced data restore balance by either throwing out data or by synthesizing missing data. More complex techniques use regression.
ANOVA is (in part) a significance test. The American Psychological Association holds the view that simply reporting significance is insufficient and that reporting confidence bounds is preferred.
ANOVA is considered to be a special case of linear regression which in turn is a special case of the general linear model. All consider the observations to be the sum of a model (fit) and a residual (error) to be minimized.
While the analysis of variance reached fruition in the 20th century, antecedents extend centuries into the past according to Stigler. These include hypothesis testing, the partitioning of sums of squares, experimental techniques and the additive model. Laplace was performing hypothesis testing in the 1770s. The development of least-squares methods by Laplace and Gauss circa 1800 provided an improved method of combining observations (over the existing practices of astronomy and geodesy). It also initiated much study of the contributions to sums of squares. Laplace soon knew how to estimate a variance from a residual (rather than a total) sum of squares. By 1827 Laplace was using least squares methods to address ANOVA problems regarding measurements of atmospheric tides. Before 1800 astronomers had isolated observational errors resulting from reaction times (the "personal equation") and had developed methods of reducing the errors. The experimental methods used in the study of the personal equation were later accepted by the emerging field of psychology which developed strong (full factorial) experimental methods to which randomization and blinding were soon added. An eloquent non-mathematical explanation of the additive effects model was available in 1885.
Sir Ronald Fisher introduced the term "variance" and proposed a formal analysis of variance in a 1918 article The Correlation Between Relatives on the Supposition of Mendelian Inheritance. His first application of the analysis of variance was published in 1921. Analysis of variance became widely known after being included in Fisher's 1925 book Statistical Methods for Research Workers.
One of the attributes of ANOVA which ensured its early popularity was computational elegance. The structure of the additive model allows solution for the additive coefficients by simple algebra rather than by matrix calculations. In the era of mechanical calculators this simplicity was critical. The determination of statistical significance also required access to tables of the F function which were supplied by early statistics texts.
- Randomization is a term used in multiple ways in this material. "Randomization has three roles in applications: as a device for eliminating biases, for example from unobserved explanatory variables and selection effects: as a basis for estimating standard errors: and as a foundation for formally exact significance tests." Cox (2006, page 192) Hinkelmann and Kempthorne use randomization both in experimental design and for statistical analysis.
- Unit-treatment additivity is simply termed additivity in most texts. Hinkelmann and Kempthorne add adjectives and distinguish between additivity in the strict and broad senses. This allows a detailed consideration of multiple error sources (treatment, state, selection, measurement and sampling) on page 161.
- Rosenbaum (2002, page 40) cites Section 5.7 (Permutation Tests), Theorem 2.3 (actually Theorem 3, page 184) of Lehmann's Testing Statistical Hypotheses (1959).
- The F-test for the comparison of variances has a mixed reputation. It is not recommended as a hypothesis test to determine whether two different samples have the same variance. It is recommended for ANOVA where two estimates of the variance of the same sample are compared. While the F-test is not generally robust against departures from normality, it has been found to be robust in the special case of ANOVA. Citations from Moore & McCabe (2003): "Analysis of variance uses F statistics, but these are not the same as the F statistic for comparing two population standard deviations." (page 554) "The F test and other procedures for inference about variances are so lacking in robustness as to be of little use in practice." (page 556) "[The ANOVA F test] is relatively insensitive to moderate nonnormality and unequal variances, especially when the sample sizes are similar." (page 763) ANOVA assumes homoscedasticity, but it is robust. The statistical test for homoscedasticity (the F-test) is not robust. Moore & McCabe recommend a rule of thumb.
- Gelman (2005, p 2)
- Howell (2002, p 320)
- Montgomery (2001, p 63)
- Gelman (2005, p 1)
- Gelman (2005, p 5)
- "Section 5.7. A Glossary of DOE Terminology". NIST Engineering Statistics handbook. NIST. Retrieved 5 April 2012.
- "Section 4.3.1 A Glossary of DOE Terminology". NIST Engineering Statistics handbook. NIST. Retrieved 14 Aug 2012.
- Montgomery (2001, Chapter 12: Experiments with random factors)
- Gelman (2005, pp 20–21)
- Snedecor, George W.; Cochran, William G. (1967). Statistical Methods (6th ed.). p. 321.
- Cochran & Cox (1992, p 48)
- Howell (2002, p 323)
- Anderson, David R.; Sweeney, Dennis J.; Williams, Thomas A. (1996). Statistics for business and economics (6th ed.). Minneapolis/St. Paul: West Pub. Co. pp. 452–453. ISBN 0-314-06378-1.
- Anscombe (1948)
- Kempthorne (1979, p 30)
- Cox (1958, Chapter 2: Some Key Assumptions)
- Hinkelmann and Kempthorne (2008, Volume 1, Throughout. Introduced in Section 2.3.3: Principles of experimental design; The linear model; Outline of a model)
- Hinkelmann and Kempthorne (2008, Volume 1, Section 6.3: Completely Randomized Design; Derived Linear Model)
- Hinkelmann and Kempthorne (2008, Volume 1, Section 6.6: Completely randomized design; Approximating the randomization test)
- Bailey (2008, Chapter 2.14 "A More General Model" in Bailey, pp. 38–40)
- Hinkelmann and Kempthorne (2008, Volume 1, Chapter 7: Comparison of Treatments)
- Kempthorne (1979, pp 125–126, "The experimenter must decide which of the various causes that he feels will produce variations in his results must be controlled experimentally. Those causes that he does not control experimentally, because he is not cognizant of them, he must control by the device of randomization." "[O]nly when the treatments in the experiment are applied by the experimenter using the full randomization procedure is the chain of inductive inference sound. It is only under these circumstances that the experimenter can attribute whatever effects he observes to the treatment and the treatment only. Under these circumstances his conclusions are reliable in the statistical sense.")
- Freedman[full citation needed]
- Montgomery (2001, Section 3.8: Discovering dispersion effects)
- Hinkelmann and Kempthorne (2008, Volume 1, Section 6.10: Completely randomized design; Transformations)
- Bailey (2008)
- Montgomery (2001, Section 3-3: Experiments with a single factor: The analysis of variance; Analysis of the fixed effects model)
- Cochran & Cox (1992, p 2 example)
- Cochran & Cox (1992, p 49)
- Hinkelmann and Kempthorne (2008, Volume 1, Section 6.7: Completely randomized design; CRD with unequal numbers of replications)
- Moore and McCabe (2003, page 763)
- Gelman (2008)
- Montgomery (2001, Section 5-2: Introduction to factorial designs; The advantages of factorials)
- Belle (2008, Section 8.4: High-order interactions occur rarely)
- Montgomery (2001, Section 5-1: Introduction to factorial designs; Basic definitions and principles)
- Cox (1958, Chapter 6: Basic ideas about factorial experiments)
- Montgomery (2001, Section 5-3.7: Introduction to factorial designs; The two-factor factorial design; One observation per cell)
- Wilkinson (1999, p 596)
- Montgomery (2001, Section 3-7: Determining sample size)
- Howell (2002, Chapter 8: Power)
- Howell (2002, Section 11.12: Power (in ANOVA))
- Howell (2002, Section 13.7: Power analysis for factorial experiments)
- Moore and McCabe (2003, pp 778–780)
- Wilkinson (1999, p 599)
- Montgomery (2001, Section 3-4: Model adequacy checking)
- Moore and McCabe (2003, p 755, Qualifications to this rule appear in a footnote.)
- Montgomery (2001, Section 3-5.8: Experiments with a single factor: The analysis of variance; Practical interpretation of results; Comparing means with a control)
- Hinkelmann and Kempthorne (2008, Volume 1, Section 7.5: Comparison of Treatments; Multiple Comparison Procedures)
- Howell (2002, Chapter 12: Multiple comparisons among treatment means)
- Montgomery (2001, Section 3-5: Practical interpretation of results)
- Cochran & Cox (1957, p 9, "[T]he general rule [is] that the way in which the experiment is conducted determines not only whether inferences can be made, but also the calculations required to make them.")
- "The Probable Error of a Mean". Biometrika 6: 1–0. 1908. doi:10.1093/biomet/6.1.1.
- Montgomery (2001, Section 3-3.4: Unbalanced data)
- Montgomery (2001, Section 14-2: Unbalanced data in factorial design)
- Wilkinson (1999, p 600)
- Gelman (2005, p.1) (with qualification in the later text)
- Montgomery (2001, Section 3.9: The Regression Approach to the Analysis of Variance)
- Howell (2002, p 604)
- Howell (2002, Chapter 18: Resampling and nonparametric approaches to data)
- Montgomery (2001, Section 3-10: Nonparametric methods in the analysis of variance)
- Stigler (1986)
- Stigler (1986, p 134)
- Stigler (1986, p 153)
- Stigler (1986, pp 154–155)
- Stigler (1986, pp 240–242)
- Stigler (1986, Chapter 7 - Psychophysics as a Counterpoint)
- Stigler (1986, p 253)
- Stigler (1986, pp 314–315)
- The Correlation Between Relatives on the Supposition of Mendelian Inheritance. Ronald A. Fisher. Philosophical Transactions of the Royal Society of Edinburgh. 1918. (volume 52, pages 399–433)
- On the "Probable Error" of a Coefficient of Correlation Deduced from a Small Sample. Ronald A. Fisher. Metron, 1: 3-32 (1921)
- Scheffé (1959, p 291, "Randomization models were first formulated by Neyman (1923) for the completely randomized design, by Neyman (1935) for randomized blocks, by Welch (1937) and Pitman (1937) for the Latin square under a certain null hypothesis, and by Kempthorne (1952, 1955) and Wilk (1955) for many other designs.")
- Anscombe, F. J. (1948). "The Validity of Comparative Experiments". Journal of the Royal Statistical Society. Series A (General) 111 (3): 181–211. doi:10.2307/2984159. JSTOR 2984159. MR 30181.
- Bailey, R. A. (2008). Design of Comparative Experiments. Cambridge University Press. ISBN 978-0-521-68357-9. Pre-publication chapters are available on-line.
- Belle, Gerald van (2008). Statistical rules of thumb (2nd ed.). Hoboken, N.J: Wiley. ISBN 978-0-470-14448-0.
- Cochran, William G.; Cox, Gertrude M. (1992). Experimental designs (2nd ed.). New York: Wiley. ISBN 978-0-471-54567-5.
- Cohen, Jacob (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Routledge ISBN 978-0-8058-0283-2
- Cohen, Jacob (1992). "A power primer". Psychological Bulletin 112 (1): 155–159. doi:10.1037/0033-2909.112.1.155. PMID 19565683.
- Cox, David R. (1958). Planning of experiments. Reprinted as ISBN 978-0-471-57429-3
- Cox, D. R. (2006). Principles of statistical inference. Cambridge New York: Cambridge University Press. ISBN 978-0-521-68567-2.
- Freedman, David A.(2005). Statistical Models: Theory and Practice, Cambridge University Press. ISBN 978-0-521-67105-7
- Gelman, Andrew (2005). "Analysis of variance? Why it is more important than ever". The Annals of Statistics 33: 1–53. doi:10.1214/009053604000001048.
- Gelman, Andrew (2008). "Variance, analysis of". The new Palgrave dictionary of economics (2nd ed.). Basingstoke, Hampshire New York: Palgrave Macmillan. ISBN 978-0-333-78676-5.
- Hinkelmann, Klaus & Kempthorne, Oscar (2008). Design and Analysis of Experiments. I and II (Second ed.). Wiley. ISBN 978-0-470-38551-7.
- Howell, David C. (2002). Statistical methods for psychology (5th ed.). Pacific Grove, CA: Duxbury/Thomson Learning. ISBN 0-534-37770-X.
- Kempthorne, Oscar (1979). The Design and Analysis of Experiments (Corrected reprint of (1952) Wiley ed.). Robert E. Krieger. ISBN 0-88275-105-0.
- Lehmann, E.L. (1959) Testing Statistical Hypotheses. John Wiley & Sons.
- Montgomery, Douglas C. (2001). Design and Analysis of Experiments (5th ed.). New York: Wiley. ISBN 978-0-471-31649-7.
- Moore, David S. & McCabe, George P. (2003). Introduction to the Practice of Statistics (4e). W H Freeman & Co. ISBN 0-7167-9657-0
- Rosenbaum, Paul R. (2002). Observational Studies (2nd ed.). New York: Springer-Verlag. ISBN 978-0-387-98967-9
- Scheffé, Henry (1959). The Analysis of Variance. New York: Wiley.
- Stigler, Stephen M. (1986). The history of statistics : the measurement of uncertainty before 1900. Cambridge, Mass: Belknap Press of Harvard University Press. ISBN 0-674-40340-1.
- Wilkinson, Leland (1999). "Statistical Methods in Psychology Journals; Guidelines and Explanations". American Psychologist 54 (8): 594–604. doi:10.1037/0003-066X.54.8.594.
- Box, G. E. P. (1953). "Non-Normality and Tests on Variances". Biometrika (Biometrika Trust) 40 (3/4): 318–335. JSTOR 2333350.
- Box, G. E. P. (1954). "Some Theorems on Quadratic Forms Applied in the Study of Analysis of Variance Problems, I. Effect of Inequality of Variance in the One-Way Classification". The Annals of Mathematical Statistics 25 (2): 290. doi:10.1214/aoms/1177728786.
- Box, G. E. P. (1954). "Some Theorems on Quadratic Forms Applied in the Study of Analysis of Variance Problems, II. Effects of Inequality of Variance and of Correlation Between Errors in the Two-Way Classification". The Annals of Mathematical Statistics 25 (3): 484. doi:10.1214/aoms/1177728717.
- Caliński, Tadeusz & Kageyama, Sanpei (2000). Block designs: A Randomization approach, Volume I: Analysis. Lecture Notes in Statistics 150. New York: Springer-Verlag. ISBN 0-387-98578-6.
- Christensen, Ronald (2002). Plane Answers to Complex Questions: The Theory of Linear Models (Third ed.). New York: Springer. ISBN 0-387-95361-2.
- Cox, David R. & Reid, Nancy M. (2000). The theory of design of experiments. (Chapman & Hall/CRC). ISBN 978-1-58488-195-7
- Fisher, Ronald (1918). "Studies in Crop Variation. I. An examination of the yield of dressed grain from Broadbalk". Journal of Agricultural Science 11: 107–135.
- Freedman, David A.; Pisani, Robert; Purves, Roger (2007) Statistics, 4th edition. W.W. Norton & Company ISBN 978-0-393-92972-0
- Hettmansperger, T. P.; McKean, J. W. (1998). Robust nonparametric statistical methods. Kendall's Library of Statistics 5 (First ed.). New York: Edward Arnold. pp. xiv+467 pp. ISBN 0-340-54937-8. MR 1604954.
- Lentner, Marvin; Thomas Bishop (1993). Experimental design and analysis (Second ed.). P.O. Box 884, Blacksburg, VA 24063: Valley Book Company. ISBN 0-9616255-2-X.
- Tabachnick, Barbara G. & Fidell, Linda S. (2007). Using Multivariate Statistics (5th ed.). Boston: Pearson International Edition. ISBN 978-0-205-45938-4
- Wichura, Michael J. (2006). The coordinate-free approach to linear models. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge: Cambridge University Press. pp. xiv+199. ISBN 978-0-521-86842-6. MR 2283455.
- SOCR ANOVA Activity and interactive applet.
- Examples of all ANOVA and ANCOVA models with up to three treatment factors, including randomized block, split plot, repeated measures, and Latin squares, and their analysis in R
- NIST/SEMATECH e-Handbook of Statistical Methods, section 7.4.3: "Are the means equal?"
History of the United States
The United States
stayed out of World War I until 1917. But then, German acts of
aggression convinced most Americans of the need to join the war against
Germany. For the first time in its history, the United States mobilized
for a full-scale war on foreign territory.
A new place in the world (1917-1929)
The decade following
World War I brought sweeping changes. The economy entered a period of
spectacular--though uneven--growth. The booming economy and fast-paced
life of the decade gave it the nickname of the Roaring Twenties. But the
good times ended abruptly. In 1929, a stock market crash triggered the
worst and longest depression in America's history.
World War I and the peace
The United States in
the war. After World War I began in 1914, the United States repeatedly
declared its neutrality. But increasingly, German acts of aggression
brought America closer to joining the Allies. On May 7, 1915, a German
submarine sank the British passenger ship Lusitania. The attack killed
1,198 people, including 128 Americans. Woodrow Wilson won reelection to
the presidency in November 1916, using the slogan, "He Kept Us Out of
War." But three months later, German submarines began sinking American
merchant ships. This and other acts of aggression led the United States
to declare war on Germany on April 6, 1917.
The American people
rallied around their government's decision to go to war. Almost 2
million men volunteered for service, and about 3 million were
conscripted. On the home front, the spirit of patriotism grew to a fever
pitch. Americans willingly let the government take almost complete
control of the economy for the good of the war effort.
World War I ended in
an Allied victory with the signing of an armistice on Nov. 11, 1918.
The peace conference
and treaty. In 1919, the Allies held the Paris Peace Conference to draw
up the terms of the peace with Germany. Wilson viewed the conference as
an opportunity to establish lasting peace among nations. But the other
leading Allies were chiefly interested in gaining territory and war
payments from Germany. They adopted the Treaty of Versailles, which
ignored almost all of Wilson's proposals.
The Treaty of
Versailles did make provision for one of Wilson's proposals--an
association of nations (later called the League of Nations) that would
work to maintain peace. But the U.S. Senate failed to ratify (approve)
the Treaty of Versailles. Thus, the Senate rejected U.S. participation
in the League of Nations.
Life during the Roaring Twenties
In many ways, the
1920's marked the point at which the United States began developing into
the modern society it is today.
The role of American
women changed dramatically during the 1920's. The 19th Amendment to the
Constitution, which became law on Aug. 26, 1920, gave women the right to
vote in all elections. In addition, many new opportunities for education
and careers opened up to women during the decade.
Social change and
problems. Developments of the 1920's broadened the experiences of
millions of Americans. The mass movement to cities meant more people
could enjoy such activities as films, plays, and sporting events. Radio
broadcasting began on a large scale. The car gave people a new way to
get around. Cinemas became part of almost every city and town. The new
role of women also changed society. Many women who found careers outside
the home began thinking of themselves more as the equal of men, and less
as housewives and mothers.
The modern trends of
the 1920's brought about problems as well as benefits. Many Americans
had trouble adjusting to the impersonal, fast-paced life of cities. This
disorientation led to a rise in juvenile delinquency, crime, and other social problems.
The 18th Amendment to
the Constitution, called the prohibition amendment, caused unforeseen
problems. It outlawed the sale of alcoholic beverages throughout the
United States as of Jan. 16, 1920. Many otherwise law-abiding citizens
considered prohibition a violation of their rights. They ignored the law
and bought alcohol provided by underworld gangs.
Not all Americans saw the changes brought about during the Roaring Twenties
as being desirable. Many people yearned for a return to old American
traditions, a trend that was reflected in many areas of life. In
politics, it led to the return of a conservative federal government. In
his successful presidential campaign of 1920, Warren G. Harding used the
slogan "A Return to Normalcy." To many people, returning to "normalcy"
meant ending the strong role of the federal government that marked the
early 1900's. It also meant isolation, a turning away from the affairs
of the outside world.
In religion, the
trend toward tradition led to an upsurge of revivalism (emotional
religious preaching). Revival meetings were most common in rural areas,
but also spread to cities.
The Ku Klux Klan had
died out in the 1870's, but a new Klan gained a large following during
the 1920's. The new Klan had easy answers for Americans who were
troubled by modern problems. It blamed the problems on "outsiders,"
including blacks, Jews, Roman Catholics, foreigners, and political radicals.
The economy--boom and bust
During the 1920's,
the American economy soared to spectacular heights. Wartime government
restrictions on business ended. Conservatives gained control of the
federal government and adopted policies that aided big business.
But in spite of its
growth and apparent strength, the economy was on shaky ground. Only one
segment of the economy--manufacturing--prospered. Business executives
grew rich, but farmers and labourers became worse off. Finally, in
1929, wild speculation led to a stock market crash.
The American people grew tired of the federal government's
involvement in society that marked the Progressive Era and the war years. They
elected to Congress conservatives who promised to reduce the role of
government. Also, all three presidents elected during the
1920's--Harding, Calvin Coolidge, and Herbert Hoover--were Republicans
who agreed with the policy.
New technology enabled American manufacturers to develop new products, improve existing ones,
and turn out goods much faster and more cheaply than ever before. Sales
of such items as electric washing machines, refrigerators, and radios
soared. But the manufacturing boom depended most heavily on the growth
of the car industry. Before and during the 1920's, Henry Ford and others
refined car manufacturing to a science. The cost of cars continued to
drop and sales soared. In just 10 years between 1920 and 1930, the
number of cars registered in the United States almost tripled, growing
from about 8 million to 23 million.
Farmers and labourers did not share in the prosperity. A reduced market for farm goods
in war-torn Europe and a slowdown in the U.S. population growth led to a
decline in the demand for American farm products. Widespread poverty
among farmers and labourers cut into the demand for manufactured goods,
a contributing factor to the forthcoming depression.
Shares, speculation, and the crash.
The economic growth of the 1920's led more
Americans than ever to invest in the shares of corporations. The
investments, in turn, provided companies with a flood of new capital for
business expansion. As investors poured money into the stock market, the
value of shares soared. The upsweep led to widespread speculation, which
pushed the value of shares far beyond the level justified by earnings.
Such investment practices led to the stock market crash of 1929. In late
October, a decline in share prices set in. Panic selling followed,
lowering share prices drastically and dragging investors to financial
ruin. The stock market crash combined with the other weaknesses in the
nation's economy to bring on the Great Depression of the 1930's.
A stapedectomy is a surgical procedure of the middle ear performed to improve hearing. The world's first stapedectomy is credited to Dr. John J. Shea, Jr., performed in May, 1956, the first patient being a 54-year-old housewife who could no longer hear even with a hearing aid.
If the stapes footplate is fixed in position, rather than being normally mobile, then a conductive hearing loss results. There are two major causes of stapes fixation. The first is a disease process of abnormal mineralization of the temporal bone called otosclerosis. The second is a congenital malformation of the stapes.
In both of these situations, it is possible to improve hearing by removing the stapes bone and replacing it with a micro prosthesis - a stapedectomy, or creating a small hole in the fixed stapes footplate and inserting a tiny, piston-like prosthesis - a stapedotomy. The results of this surgery are generally most reliable in patients whose stapes has lost mobility because of otosclerosis. Nine out of ten patients who undergo the procedure will come out with significantly improved hearing while less than 1% will experience worsened hearing ability or deafness. Successful surgery usually provides an increase in hearing ability of about 20 dB. That is as much difference as having your hands over both ears, or not. The relative success rate for this surgery varies considerably between surgeons. As for any surgical procedure, all other variables fixed, the more experience the surgeon has with the surgery, the better the outcome. Since stapes surgery is fairly rare, significantly better success rates are found at facilities that specialize in this procedure.
Indications of stapedectomy:
- Conductive deafness due to fixation of stapes.
- Air bone gap of at least 40 dB.
- Presence of Carhart's notch in the audiogram of a patient with conductive deafness.
- Good cochlear reserve as assessed by the presence of good speech discrimination.
Contraindications for stapedectomy:
- Poor general condition of the patient.
- Only hearing ear.
- Poor cochlear reserve as shown by poor speech discrimination scores
- Patient with tinnitus and vertigo
- Presence of active otosclerotic foci (otospongiosis) as evidenced by a positive flamingo sign.
Complications of stapedectomy:
- Facial palsy
- Vertigo in the immediate post op period
- Perilymph gush
- Floating foot plate
- Tympanic membrane tear
- Dead labyrinth
- Perilymph fistula
When a stapedectomy is done in a middle ear with a congenitally fixed footplate, the results may be excellent but the risk of hearing damage is greater than when the stapes bone is removed and replaced (for otosclerosis). This is primarily due to the risk of additional anomalies being present in the congenitally abnormal ear. If high pressure within the fluid compartment that lies just below the stapes footplate exists, then a perilymphatic gusher may occur when the stapes is removed. Even without immediate complications during surgery, there is always concern of a perilymph fistula forming postoperatively.
A modified stapes operation, called a stapedotomy, is thought by many otologic surgeons to be safer and reduce the chances of postoperative complications. In stapedotomy, instead of removing the whole stapes footplate, a tiny hole is made in the footplate - either with a microdrill or with a laser, and a prosthesis is placed to touch this area with movement of the tympanic membrane. This procedure greatly reduces the chance of a perilymph fistula (leakage of cochlear fluid) and can be further improved by the use of a tissue graft seal of the fenestra.
Otosclerosis is a progressive degenerative condition of the temporal bone which can result in hearing loss.
Chronic conductive hearing loss (CHL) is the finding in almost all cases of otosclerosis (in fact, should a person present with sensorineural hearing loss, they would likely never be diagnosed with otosclerosis). This usually will begin in one ear but will eventually affect both ears with a variable course. On audiometry, the hearing loss is characteristically low-frequency, with higher frequencies being affected later. Sensorineural hearing loss (SNHL) has also been noted in patients with otosclerosis; this is usually a high-frequency loss, and usually manifests late in the disease.
Approximately 0.5% of the population will eventually be diagnosed with otosclerosis. Post mortem studies show that as many as 10% of people may have otosclerotic lesions of their temporal bone, but apparently never had symptoms warranting a diagnosis. Whites are the most affected race, with the prevalence in the Black and Asian populations being much lower. Females are twice as likely as males to be affected. Usually noticeable hearing loss begins at middle-age, but can start much sooner. The hearing loss often grows worse during pregnancy.
The disease can be considered to be heritable, but its penetrance and the degree of expression is so highly variable that it may be difficult to detect an inheritance pattern. Most of the implicated genes are transmitted in an autosomal dominant fashion.
The pathophysiology of otosclerosis is complex. The key lesions of otosclerosis are multifocal areas of sclerosis within the endochondral temporal bone. These lesions share some characteristics with Paget’s Disease, but they are not thought to be otherwise related. Histopathologic studies have all been done on cadaveric temporal bones, so only inferences can be made about progression of the disease histologically. This being said, it seems that the lesions go through an active “spongiotic” / hypervascular phase before developing into “sclerotic” phase lesions. There have been many genes and proteins identified that, when mutated, may lead to these lesions. Also there is mounting evidence that measles virus is present within the otosclerotic foci, implicating an infectious etiology (this has also been noted in Paget’s Disease).
CHL in otosclerosis is caused by two main sites of involvement of the sclerotic (or scar-like) lesions. The best understood mechanism is fixation of the stapes footplate to the oval window of the cochlea. This greatly impairs movement of the stapes and therefore transmission of sound into the inner ear (“ossicular coupling”). Additionally the cochlea’s round window can also become sclerotic, and in a similar way impair movement of sound pressure waves through the inner ear (“acoustic coupling”).
SNHL in otosclerosis is controversial. Over the past century, leading otologists and neurotologic researchers have argued whether the finding of SNHL late in the course of otosclerosis is due to otosclerosis or simply to typical presbycusis. There are certainly a few well documented instances of sclerotic lesions directly obliterating sensory structures within the cochlea and spiral ligament, which have been photographed and reported post-mortem. Other supporting data includes a consistent loss of cochlear hair cells in patients with otosclerosis; these cells being the chief sensory organs of sound reception. A suggested mechanism for this is the release of hydrolytic enzymes into the inner ear structures by the spongiotic lesions.
Treatment of otosclerosis relies on two primary options: hearing aids and a surgery called a stapedectomy. Hearing aids are usually very effective early in the course of the disease, but eventually a stapedectomy may be required for definitive treatment. Early attempts at hearing restoration via the simple freeing of the stapes from its sclerotic attachments to the oval window were met with temporary improvement in hearing, but the conductive hearing loss would almost always recur. A stapedectomy consists of removing a portion of the sclerotic stapes footplate and replacing it with an implant that is secured to the incus. This procedure restores continuity of ossicular movement and allows transmission of sound waves from the eardrum to the inner ear. A modern variant of this surgery, called a stapedotomy, is performed by drilling a small hole in the stapes footplate with a micro-drill or a laser, and the insertion of a piston-like prosthesis. The success rate of either a stapedotomy or a stapedectomy depends greatly on the skill and the familiarity with the procedure of the surgeon.
Other less successful treatment includes fluoride administration, which theoretically becomes incorporated into bone and inhibits otosclerotic progression. This treatment cannot reverse conductive hearing loss, but may slow the progression of both the conductive and sensorineural components of the disease process. Recently, some success has been reported with bisphosphonate medications, which stimulate bone-deposition without stimulating bony destruction.
Profit Maximization is a process by which a firm determines the price and output of a product that yield the greatest profit. The total revenue-total cost method relies on the fact that profit equals revenue minus cost; the marginal revenue-marginal cost method is based on the fact that total profit in a perfectly competitive market reaches its maximum point where marginal revenue equals marginal cost.
Any costs incurred by a firm may be divided into two groups: fixed cost and variable cost. Fixed costs are incurred by the business at any level of output, including zero output. These include equipment maintenance, rent, wages and general upkeep. Variable costs change with the level of output, increasing as more product is generated. Materials consumed during production often have the largest impact on this category. Fixed cost and variable cost combine to form the total cost.
Revenue is the amount of money that a company receives from its normal business activities, usually from the sale of goods and services.
Marginal cost and revenue, depending on whether the calculus approach is taken or not, are defined as either the change in cost or revenue as each additional unit is produced, or the derivative of cost and revenue with respect to quantity output. It may also be defined as the addition to total cost or revenue as output increase by a single unit. For instance, taking the first definition, if it costs a firm $400 to produce 5 units and $480 to produce 6 units, then the marginal cost of the 6th unit is $80.
Total Cost Total Revenue Method: To obtain the profit maximizing output quantity, we start by recognizing that profit is equal to total revenue minus total cost. Given a table of costs and revenues at each quantity, we can plot the data directly on a graph.
Marginal Cost Marginal Revenue Method: If total cost and total revenue are difficult to procure, then this method may also be used. For each unit sold, marginal profit equals marginal revenue minus marginal cost. Then, if marginal revenue is greater than marginal cost, marginal profit is positive; if marginal revenue is less than marginal cost, marginal profit is negative; and when marginal cost and marginal revenue are equal, marginal profit is zero. Since total profit increases when marginal profit is positive and decreases when marginal profit is negative, total profit must be at its maximum when marginal profit is zero – that is, where marginal cost equals marginal revenue.
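A small numerical sketch of both methods follows; the cost and revenue functions below are invented purely for illustration and are not taken from any real firm.

    # Hypothetical cost and revenue schedules (assumed for illustration only).
    def total_cost(q):
        return 400 + 80 * q + 0.5 * q ** 2      # fixed cost of 400 plus variable cost

    def total_revenue(q):
        return 200 * q - 0.25 * q ** 2          # revenue grows more slowly as output rises

    quantities = range(0, 401)

    # Total revenue - total cost method: pick the output with the largest profit.
    best_q = max(quantities, key=lambda q: total_revenue(q) - total_cost(q))

    # Marginal method: marginal profit (MR - MC) per extra unit changes sign from
    # positive to negative at the same output level, identifying the same optimum.
    marginal_profit = [(total_revenue(q + 1) - total_revenue(q))
                       - (total_cost(q + 1) - total_cost(q)) for q in quantities]

With these assumed functions, both approaches point to the same profit-maximizing quantity (80 units), illustrating that the two methods are equivalent ways of locating the point where marginal revenue equals marginal cost.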
About the Company: HelpWithAssignment.com is an online tutoring company. Our network spans 3 continents and several countries. We offer three kinds of services: Assignment Help, Thesis Help and Online Tuitions for students in their college or University. http://www.helpwithassignment.com/
Intermediate scales are effective, but can be improved
In this blog we’ve been talking about learning goals and scales and the phases teachers go through in their development and use. We’ve also been sharing examples (and non-examples) along the way, with the goal of helping teachers and administrators develop a deeper understanding of what they should look like and how they should be used.
This post looks at intermediate scales, which are effective scales but can be improved.
Here’s what we know about learning goals and scales at the intermediate stage:
1. Learning goals are usually expressed in one of the two following forms: Students will understand or Students will be able to …
Teachers now recognize they can use more specific words in their learning goals than understand or be able to. As they use more precise verbs, the goals and scales become better.
Webb’s DOK suggests the following verbs for each level:
|Level 1|Identify • Recite • List • Recognize • Report • Match • Define • Recall • Who, What, When, Where, Why|
|Level 2|Categorize • Relate • Investigate • Show • Compare • Estimate • Summarize • Classify • Predict • Construct|
|Level 3|Revise • Investigate • Assess • Develop • Construct • Relate • Develop logical argument • Differentiate • Draw conclusions|
|Level 4|Design • Connect • Synthesize • Critique • Analyze • Create • Prove|
2. Scales are a learning progression.
3. Scales are good for one or two days rather than one to three weeks.
4. Teachers feel they must create a hundred or more scales for the whole year in each subject area, rather than 24 two-week scales.
5. Scales consist of a learning progression, guided by a taxonomy of knowledge — Marzano’s or Bloom’s, or Webb’s (see Table 2 below). The lower levels of the scale match up with the lower ends of the taxonomy.
|Taxonomy|Level 1|Level 2|Level 3|Level 4|
|Webb's DOK|Recall|Skills and concepts|Strategic thinking|Extended thinking|
Scales are constructed progressing from the lower levels of the taxonomy to the higher levels.
6. Teachers are starting to see that monitoring, and tracking student progress along a learning progression, are two different processes. Monitoring is important for checking on students, and the effectiveness of an instructional strategy, but different from measuring student progress on a learning progression along a scale.
7. Teachers realize a progression of learning is hierarchical in nature and increases in cognitive complexity — from Level 1 to Level 4 — and are learning how to construct and use them.
8. Teachers at this stage write goals and scales more for adults than for students; the students are not personalizing their own learning objectives.
Learning Goal: Students will be able to explain the events leading to the American Revolutionary War.
4.0 Students will be able to compare and contrast the events leading to the American Revolutionary War with the events leading to the French Revolution.
3.0 Students will be able to explain the events leading to the American Revolutionary War.
2.0 Students will be able to recall a few of the conflicts between the colonies and Great Britain.
1.0 Students will be able to recall the relationship between the colonies and Britain.
At the intermediate stage, learning goals and scales are no longer about monitoring and the scale depicts a progression of learning. They are not robust enough, they’re still too short term, and they’re designed more for adults than for students.
Next week, when we look at excellent learning goals and scales, we will see some good examples!
Setting the ground rules for learning goals and scales
In this blog we’ve been talking about learning goals and scales. Our next series will extend the concepts with a focus on the phases teachers go through, including examples and commentary.
So let’s start with a review of the developmental stages teachers go through as they move from beginner to expert. We’ll preface each stage with the foundational knowledge teachers need to implement the learning goal and scale under discussion.
It’s my hope that, after reading this blog, teachers can skip quickly through the developmental stages of implementing learning goals and scales, and become experts in their use. I’m also hoping that administrators will know what to look for in the classroom and will be able to coach teachers and provide feedback as they implement learning goals and scales.
Here’s what we know about learning goals and scales at the beginning stage:
1. Learning goals should be expressed in one of the two following forms (but it doesn’t always happen): Students will understand or Students will be able to…..
2. Scales are good for one or two days rather than two or three weeks.
3. Teachers feel they must create a hundred or more scales for the whole year in a given subject area, rather than 24 two-week scales.
4. Scales consist of student indicators of how they think they are doing, as opposed to a learning progression that starts with the less complex and continues to the more complex. Or the scale may consist of a “fist of five” with students indicating how they are doing by holding up a certain number of fingers or giving a thumb up if they know it, thumb sideways if they kind of know it, or thumb down if they don’t know it.
5. Teachers sometimes confuse monitoring the class (determining how students think they are doing at the moment) for how they are progressing along a scale or learning progression.
6. The learning progression of the scale does not reach the higher levels of thinking, and the lessons lack rigor.
7. Goals and scales are not personalized by students, but written for adults.
Example of beginning learning goals and scales:
Learning Goal: Students will understand the parts of a story, beginning, middle, and end.
Scale: Hold up:
Five fingers if you understand the parts of a story well enough to teach others.
Three fingers if you understand the parts of a story.
One finger if you understand the parts of a story with help.
Your fist if you do not understand the parts of a story.
The beginning goal and scale do not provide specific feedback as to where the students are in their understanding, so it will not increase student achievement. The scale is not measuring students’ progression of learning, but instead the students are indicating whether or not they understood what was taught. This is monitoring, not tracking student progress. Monitoring is important for determining how effective an instructional strategy is or how students think they are doing at the moment, but it is not a good learning goal and scale!
Next Week: Getting better: Learning goals and scales at beginning plus!
In The Power of Feedback (1992), John Hattie concluded that the most powerful single modification that enhances learning and achievement is feedback – but Hattie also cautioned that feedback must be of the right type, timed correctly and properly framed, to be effective. Let’s look at the Marzano Teacher Evaluation Model and see how the model is designed for administrators to provide teachers with specific, valuable feedback at many different points.
With each formal observation, during Domain 2 Planning and Preparing, teacher and administrator talk about the instruction to be provided in the observed lesson. That discussion might include:
• How the lesson scaffolds within the unit and the lesson
• How the teacher will use technology and traditional resources
• What the teacher will do for English Language Learners, Special Education Students, and Student Who Lack Support for Schooling
Summative feedback is provided to teachers in Domain 3, Reflecting on Teaching. After the lesson is completed, in Domain 3, teacher and administrator discuss how the planned lesson went and how instruction might be improved. Together, teacher and administrator develop a professional growth plan to give the teacher a specific area of focus and the structure to increase his or her expertise.
Feedback through Learning Goals
Secondly, learning goals provide teachers with feedback at the unit level. Teachers base their unit planning on the state standard or a cluster of related standards, and plan learning goals for each of the big ideas of each unit. As they track student progress for each learning goal, teachers can discuss how students are doing with teacher mentors and observers. As teachers track through the learning goals, they can ask themselves:
• Are students understanding the information, and are they able to progress through Lesson Segment 2, Addressing Content, and each of the three design questions?
• Are students able to use the critical thinking skills that accompany each of the design questions?
• Are the chunks of content the right size? (If too large, students will get lost. If too small, students will be bored).
By tracking student progress with each learning goal, teacher and student both receive unit-level feedback to help them improve.
The third mechanism in the Marzano Teacher Evaluation Model for feedback is the monitoring addressed in the scale for each strategy. At the developing level for each strategy, the teacher monitors less than a majority of the students for the desired effect. At the applying level, a majority of students, and at the innovative level, all students are monitored and reach the desired effect. Through their monitoring, teachers receive feedback on how the strategy is working. They can make the necessary adjustments to get all students to the desired effect.
Feedback through Deliberate Practice
Finally, by using Deliberate Practice, teachers collaborate with administrators to select specific strategies to practice intensively. The administrator observes teachers using selected strategies and provides specific, actionable feedback for improvement.
The Marzano Teacher Evaluation Model has embedded mechanisms for feedback – in fact, feedback and improvement are one of the model’s primary purposes. We’re Building Expertise! The model provides a common language of instruction so that teacher and administrator can be on the same page, working collaboratively toward the mutual goal of steady instructional improvement. The scale for each strategy allows for transparency and mutual understanding the teacher’s current level of expertise, and what he or she needs to do to improve and move up the scale!
Will you be joining us for the Marzano Conference 2013 in Orlando this summer? Register now for priority sessions.
I have been teaching since the ‘90s and went through Dimensions of Learning training and rubric writing way back when, and now using rubrics has made its way back into our grading. I love using rubrics and feel it provides an accurate explanation of where a student’s level of understanding is.
My questions are:
1) When writing a rubric for a level 2 question, is the question supposed to be a simpler version of the level 3 question, or a simpler skill? For example if a level 3 question is: students can add and subtract fractions using unlike denominators, should a level 2 question have problems with unlike denominators, or denominators that are the same (a simpler skill)?
2) The level 3 questions ask students to explain how they solved things. Do the explanations need to be lengthy? A level 4 question is above what we’ve worked on in the classroom, being able to apply the skill to a situation they haven’t worked on in class. In many cases these are word problems. Is that accurate? My previous training is that a level 3 is what you expect, a level 2 score means a student has some errors so they haven’t reached the level of understanding I am expecting, a level 1 means they need teacher assistance to complete the assessment, and a 4 means their understanding is above what I am expecting. Similar to the new rubrics, except now we level the questions for the students.
I work with teachers often on learning goals and scales (rubrics) and I know that you aren’t the only one with these questions!
I love that you are using something that you learned in training to help you with teaching and assessing. Lessons are more purposeful when teachers use the levels of the scale (rubric) to inform their planning. It’s best to align the activities/assignments in a learning cycle to the levels of a scale (rubric), starting at level 1 and progressing purposefully up to level 4.
In that manner you can assess your students’ progress toward the goal within the learning cycle/unit. A teacher can also create assessments that include questions at different levels of the scale (rubric) to judge if students are able to demonstrate the foundational knowledge and skills of the learning goal. This sounds like what you are asking for clarification on. Your explanation of the levels is correct, so I’ll just address your questions. If you want more information on creating assessments based on scales (rubrics), I recommend Formative Assessments and Standards Based Grading by Dr. Marzano.
Scale Levels Assess Student Skills
In answer to your first question, whether the level 2 question should be a simpler version (of the question) or a simpler skill, the short answer is: a simpler skill. Level 2 on the scale represents foundational skills that build toward the skills the student is being asked to demonstrate at level 3. In your example, a level 2 might be that the student can add and/or subtract fractions with the same denominators, or they may be able to find common denominators but not add fractions. Either of those is a foundational skill necessary to achieve the learning goal of adding and subtracting fractions with unlike denominators (CCSS 5.NF1). If you’re looking for more information on how to create a scale, I would suggest Penny Sell’s blog post The Power of Design Question 1 – Part 2: Creating Scales to Accompany Learning Goals.
Pulling Learning Goals Apart
If you merely write a simpler question, what you will have made simpler is decoding of the question. That is a different learning goal in which students will be able to solve word problems involving adding and subtracting fractions (CCSS 5.NF2). We tend to write questions for our students with both of those skills intertwined, and at some point in their learning, students should have practice questions in which those skills are put together. But as we assess students, we need to pull those learning goals apart at the lower level questions so that we can know which learning goal the student may have mastered and which he or she may be struggling with.
In answer to your question about level 4 questions being a word problem, I would say that because level 4 involves extension or application, many times these questions are word problems. In the Common Core standards though, solving word problems is a different standard (CCSS 5.NF2) than adding and subtracting unlike fractions (CCSS 5.NF1), so we can’t save the word problems for level 4 questions. In order to assess this standard, a level 3 question would need to be a word problem, since standards are the level of the learning goal which is level 3 on the scale (rubric).
I hope that this explanation is helpful. I applaud your work with scales (rubrics) and your desire to get it right.
Read more on creating learning goals and scales here.
Join the conversation! If you have successes or questions you’d like to share, please drop us a note in the comments section below.
Scaffold your lessons to build toward your targeted goal.
In my last post, I discussed the importance of setting clear learning goals to communicate to students what they are learning and why. The second key to harnessing the power of Design Question 1 in the Marzano Teacher Evaluation Model is to create a scale for each learning goal that is used to provide feedback to students. Creating scales is likely the strategy in the framework that is most new to many teachers; therefore it takes time to embed this work easily into the routine of your teaching. Like any skill that we learn throughout our lives, the more we practice, the easier it will get!
The learning goal sets our target for learning, and the scale is used to let students and parents know where in the progression of learning toward that target the student falls. Unlike a grade, a scale rating is specific to the learning goal and will show growth over time. Scales utilize the concept of formative assessment: the activities and assessments we use to measure a student’s growth during the learning. Teachers consistently use these types of assessments to check for student understanding and to make adjustments to instruction. By adding scales to our classroom practice, we can easily use the formative assessments to provide specific feedback to students about their progression toward the learning goal. Follow the steps below to get started.
Step 1 – Write a scale for each learning goal
Write a scale for the overall learning goal you are teaching, not the activities and assignments you are using each day to help students reach the goal. You might have 2-3 learning goals you are addressing within a unit of instruction and the scale is attached to those goals.
Example: Students will be able to convert between standard and nonstandard unit measurements.
Step 2 – Break down the learning goal into parts
Unpack your learning goal in a way that shows how students will progress toward the goal. It will help to think about how you scaffold your teaching along the way. What are the simpler parts of the goal that you teach first? Are there building blocks along the way to the accomplishment of the goal? These building blocks can become the steps of your scale and should progress from the simpler parts to the more complex overall goal.
1. Students will be able to make simple measurements in standard units.
2. Students will be able to make simple conversions within standard or nonstandard unit measurements.
Step 3 – Place the unpacked learning goal into your scale
Using a simple scale, place the simpler parts of your goal at level 2 and the target learning goal at level 3 on the scale. Level 4 on the scale should be an application of the target goal that requires students to go beyond what was explicitly taught or a more complex version of the target learning goal.
|4.0||Students will be able to:|
|3.0||Students will be able to:|
|2.0||Students will be able to:|
|1.0||With help, partial success at level 2.0 and 3.0 content|
|0.0||Even with help, no success|
Here’s an example of one I’ve used for staff development:
|4.0||In addition to 3.0 content, participants will be able to:|
|3.0||Participants will be able to:|
|2.0||Participants will demonstrate understanding of:|
|1.0||With help, partial success at level 2.0 and 3.0 content|
Step 4 – Share your scale with your students
Use the scale with your students to track their progress toward the goal and celebrate their success! I’ll talk more about this last power strategy in my next post!
Check out some scales that other teachers have written in this scale bank.
Do you have any success stories to share about using scales in your classroom? Share them in your comments below!
One of the other big questions I get about learning goals is: Who needs to create them?
If you have other same grade-level teachers in your school, it makes sense to create learning goals and scales together as a grade level and share them. There is also a bank of scales at www.marzanoresearch.com under the free resources tab. You have to sign up for the site, but it doesn’t cost anything. Some districts use curriculum writing teams in the summer to create learning goals and scales that teachers have the option of using.
One of our readers, a middle school language teacher, has also written us to say that she enlists help from her students in writing goals – their collaboration helps them buy in and feel a sense of ownership for their learning targets.
Remember that the key is to establish an initial target and provide feedback to students with information regarding their progress toward it. How learning goals are created, and how they are made accessible, are important logistics, but they aren’t in themselves what increases student learning. Student learning comes from the feedback you give students as they progress toward the learning goal (which is another blog post in the making!).
Share your expertise! Tell us how you create and post your learning goals and scales. Or use the comments space below to ask us a question. We’d love to hear from you.
This is Part 1 of a response to a question we received this week. Look for Part 2 – Who Creates the Scales? – tomorrow.
Dear Sharing Expertise:
I am a second grade teacher. We are supposed to be posting objectives, essential questions, and benchmarks for all subjects we teach every day. What are your thoughts on this? We really haven’t received much training, and everyone seems a bit confused (including me). Do you have any management tips or ideas to make this any easier? Just finding the space to post all of this stuff seems hard. Thanks!—Nicole
Good questions! It sounds like in your district you are using benchmarks as learning goals, and also using essential questions to help spark student interest at that same level. Here’s one way to sort it out:
Objectives are daily learning targets. These daily targets will scaffold toward the learning goal (benchmark/essential question).
Why you are being asked to post them
Here’s a little background about learning goals (benchmarks and essential questions) and objectives to set the stage:
In looking at research conducted by several education experts, Dr. Robert Marzano addresses the importance of setting clear learning goals. “Clear goals,” he says, “establish an initial target. Feedback provides students with information regarding their progress toward that target. Goal setting and feedback used in tandem are probably more powerful than either one in isolation. In fact, without clear goals, it might be difficult to provide effective feedback.” (See The Art and Science of Teaching, p. 12)
So the key is to establish an initial target and provide feedback to students with information regarding their progress toward that target.
Now let’s look at the logistics. Remember:
Not all goals need to be visible all the time.
The important thing to remember about learning goals, objectives, and scales is that students need access to them as they are learning. How that happens is flexible. Students don’t need access to all the goals all day. For instance, math goals and scales don’t need to be visible during language arts. Students only need access to the goals that pertain to their learning today. They don’t need the scales from two months ago, or from three weeks in the future.
Making Learning Goals Accessible:
Here are 6 ideas about how to make learning goals accessible:
1. Flip Charts. You can use one page for each of the different subjects so that you can flip to the subject specific goal every time you change subjects.
2. Work Packets. Some teachers use the learning goal and scale as part of a work packet. For example, at the start of a unit, the learning goals and scales are listed, as well as the daily objectives. You create a system so students know which goal to focus on each day.
3. Laminated Templates. You can laminate a template and write on it with an erasable marker (See picture).
4. Attached to assignments. Some teachers put the learning goal and objective on every activity and assignment so that the student can reference them as they are completing the assignment.
5. Taped to desks. I have seen teachers tape long-term goals to students’ desks—such as cooperative learning or other non-cognitive goals that last all year.
6. Index cards. You can make index cards with the goals for a unit and attach them to a large O ring so that students keep them on their desk or in their binders.
If you would like to read more about creating learning goals and scales, see Penny Sell’s blog post here. Or see Dr. Marzano’s book: Designing and Teaching Learning Goals and Objectives (2009)
There are many more creative ways teachers make learning goals and scales accessible to students. If this list sparked your imagination, please feel free to add more ideas in the comment section.
Make the connection between the Marzano Taxonomy and Common Core State Standards for the most effective assessments of student learning
I was having a conversation with some educator friends recently. We were talking about effective classroom strategies and the conversation led to the creation of learning goals and scales. These educators were confident with creating learning goals from the Common Core State Standards, but they struggled a little with creating scales that related to those standards, and making sure they differentiate for the students they have in their classrooms.
I suggested a five-step process for creating scales that align with their learning goals.
Step 1: Create your learning goal.
The first step is to create your target learning goal from your content standards. For most, that means the Common Core State Standards (CCSS). The CCSS lend themselves well to creating learning goals.
Target Learning Goal Example: RI.5.3. Explain the relationships or interactions between two or more individuals, events, ideas, or concepts in a historical, scientific, or technical text based on specific information in the text. (This is a Reading for Informational Text, Grade 5 CCSS Standard.)
Step 2: Place the learning goal at the 3.0 position on the scale.
Place that exact learning goal in the 3.0 or proficient (meets the standard) spot in your scale. This is the target learning goal for the majority of students in the class. In this case, the target learning goal is at the Comprehension level of Marzano’s Taxonomy.
Step 3: Create a more complex learning goal and place it in the 4.0 position.
Create a more complex learning goal that uses the same content idea as your target learning goal but raises the level of thinking required. To do this, use the PDF of Marzano’s Taxonomy, Useful Verbs. At 4.0, the highest level of the scale, the learning goal should be in the top two levels of the taxonomy – Analysis and Knowledge Utilization. Both of the examples below are at the Analysis level.
Example #1, Analysis at 4.0 Level: Explain the relationships or interactions between two or more individuals, events, ideas, or concepts in a historical, scientific, or technical text based on specific information in the text, and determine what inferences can be made based on this information.
Example #2, Analysis at 4.0 Level: Explain the relationships or interactions between two or more individuals, events, ideas, or concepts in a historical, scientific, or technical text based on specific information in the text, and compare and contrast these individuals, events, ideas, or concepts.
Step 4: Create a simpler learning goal and place it in the 2.0 position.
Create a more simplified learning goal that uses the same content ideas as your target learning goal. Again, you can use Marzano’s Taxonomy to help you. In this case, the 2.0 learning goal should be at the first level of the taxonomy – Retrieval.
Example of Retrieval at 2.0 Level:
Describe an individual, event, idea, or concept in a historical, scientific, or technical text based on specific information in the text.
Step 5: Levels 1.0 and 0.0 do not have learning goals associated with them; they are representative of a student’s performance or lack of performance.
You now have a scale for the Common Core ELA standard RI.5.3. This process really helped these educators create scales aligned with their learning goals.
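For readers who find it helpful to see the finished scale in one place, here is a minimal sketch in Python (purely illustrative, hypothetical code, and not part of the Marzano materials) that stores the RI.5.3 scale from Steps 1-5 as a simple lookup table. The 4.0, 3.0, and 2.0 descriptors are condensed from the examples above, and the 1.0 and 0.0 wording follows the generic scale shown earlier in this series.

```python
# Hypothetical, illustrative representation of the RI.5.3 scale built in Steps 1-5.
scale_ri_5_3 = {
    4.0: ("Explain the relationships or interactions between two or more "
          "individuals, events, ideas, or concepts in a historical, scientific, "
          "or technical text and determine what inferences can be made "
          "(Analysis level)."),
    3.0: ("Explain the relationships or interactions between two or more "
          "individuals, events, ideas, or concepts in a historical, scientific, "
          "or technical text based on specific information in the text "
          "(target learning goal, Comprehension level)."),
    2.0: ("Describe an individual, event, idea, or concept in a historical, "
          "scientific, or technical text based on specific information in the "
          "text (Retrieval level)."),
    1.0: "With help, partial success at level 2.0 and 3.0 content.",
    0.0: "Even with help, no success.",
}

def describe(level: float) -> str:
    """Return the descriptor for a given scale rating."""
    return scale_ri_5_3[level]

print(describe(3.0))  # prints the target learning goal for RI.5.3
```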
Does this process help you as you create scales aligned to your learning goals? Do you have a different process for creating scales? Share your ideas in the comments section or ask us a question. We’re standing by to answer!
The Power of Dr. Marzano’s Design Question 1
As teachers across the country embark on a new school year, it’s the perfect time to reflect on the power of clearly defining learning goals so students know what they are learning and why. Design Question 1 in Dr. Marzano’s Teacher Evaluation Model incorporates specific strategies to achieve just that:
• Providing Clear Learning Goals and Scales
• Tracking Student Progress
• Celebrating Success
When a classroom teacher embraces these strategies and involves students in an authentic way, the combination can boost student success, invigorate the teaching/learning process, and create a classroom culture where students take more responsibility for their own learning.
An important step in utilizing this trio of Marzano strategies is to use the standards or benchmarks of your curriculum to write learning goals that communicate the essence of the standards in a way that is meaningful to students. Use the following guidelines as you practice writing learning goals that will drive your instruction and will be most beneficial to your students.
1. Learning Goals for Two Kinds of Knowledge
Write learning goals that communicate the declarative and procedural knowledge students need, rather than statements that communicate the activities students will do to reach the goal. Keeping this distinction in mind will give direction to the day-to-day activities and assignments that you design for students and provide a clear intention for student learning.
2. Keep Goals Specific
To gain the maximum impact on student achievement, write goals that specifically target the intended learning rather than goals that are too broad or general. It is best to aim for the middle ground here as the intent is not to overwhelm yourself with too many goals, but to make them specific enough that students have a clear understanding of the target.
3. Aim for Moderate Difficulty
Consider the level of difficulty the learning goals will present for your students. Research fully supports that students are most motivated by goals that are moderately difficult: attainable, but neither too easy nor too difficult. If you are teaching a diverse group of learners, it may be valuable to write learning goals at more than one level of difficulty.
4. Monitor Student Understanding
Use language that is student-friendly and think about how you will monitor student understanding of the goals. The intent of the strategy is not just that you have a goal posted, rather that students fully understand what the goal means for their learning.
5. Know What “Mastery” Looks Like
Talk with your colleagues about what mastery of the goal looks like for students. Having a clear picture in your mind about how students will demonstrate that mastery will enable you to design learning and assessment tasks that match the intent of the learning goal.
6. Get Student Input
Involve your students in writing and/or revising goals so that they fully understand the target and are invested in the learning process that will be guided by the goal.
The start of school is just around the corner. As you re-engage yourself in the delight a new group of students will bring, focus your thoughts on how you can make the standards you will teach meaningful to each child. This first step of Design Question 1 holds great power to make this year a great success for your students.
In my next post, I’ll focus on creating the scales that will accompany your goals as a means to provide feedback on student performance. Enjoy your last days of summer!
How do you make standards meaningful for your individual students? Please share your ideas in the comments section. Or ask us a question and we’ll do our best to work it through with you.
For further information about teacher growth and student achievement, visit Marzano Center: http://www.marzanocenter.com/blog/tag/learning-goals/
Malaria in humans is caused by one of four protozoan species of the genus Plasmodium: P. falciparum, P. vivax, P. ovale, or P. malariae. All species are transmitted by the bite of an infected female Anopheles mosquito. Occasionally, transmission occurs by blood transfusion, organ transplantation, needle-sharing, or congenitally from mother to fetus. Although malaria can be a fatal disease, illness and death from malaria are largely preventable.
Malaria is a major international public health problem, causing 300-500 million infections worldwide and approximately 1 million deaths annually. Information about malaria risk in specific countries (Yellow Fever Vaccine Requirements and Information on Malaria Risk and Prophylaxis, by Country) is derived from various sources, including WHO. The information presented herein was accurate at the time of publication; however, factors that can change rapidly and from year to year, such as local weather conditions, mosquito vector density, and prevalence of infection, can markedly affect local malaria transmission patterns. Updated information may be found on the CDC Travelers' Health website: http://www.cdc.gov/travel .
Malaria transmission occurs in large areas of Central and South America, the island of Hispaniola (the Dominican Republic and Haiti), Africa, Asia (including the Indian Subcontinent, Southeast Asia, and the Middle East), Eastern Europe, and the South Pacific.
The estimated risk for a traveler's acquiring malaria differs substantially from area to area. This variability is a function of the intensity of transmission within the various regions and of the itinerary and time and type of travel. From 1985 through 2002, 11,896 cases of malaria among U.S. civilians were reported to CDC. Of these, 6,961 (59%) were acquired in sub-Saharan Africa; 2,237 (19%) in Asia; 1,672 (14%) in the Caribbean and Central and South America; and 822 (7%) in other parts of the world. During this period, 76 fatal malaria infections occurred among U.S. civilians; 71 (93%) were caused by P. falciparum , of which 52 (73%) were acquired in sub-Saharan Africa.
Thus, most imported P. falciparum malaria among U.S. travelers was acquired in Africa, even though only 467,940 U.S. residents traveled to countries in that region in 2002. In contrast, that year 21 million U.S. residents traveled from the United States to other countries where malaria is endemic (including 19 million travelers to Mexico). This disparity in the risk for acquiring malaria reflects the fact that the predominant species of malaria transmitted in sub-Saharan Africa is P. falciparum , that malaria transmission is generally higher in Africa than in other parts of the world, and that malaria is often transmitted in urban areas as well as rural areas in sub-Saharan Africa. In contrast, malaria transmission is generally lower in Asia and South America, a larger proportion of the malaria is P. vivax , and most urban areas do not have malaria transmission.
Risk to Travelers
Estimating the risk for infection for various types of travelers is difficult. Risk can differ substantially even for persons who travel or reside temporarily in the same general areas within a country. For example, travelers staying in air-conditioned hotels may be at lower risk than backpackers or adventure travelers. Similarly, long-term residents living in screened and air-conditioned housing are less likely to be exposed than are persons living without such amenities, such as Peace Corps volunteers. Travelers should also be reminded that even if one has had malaria before, one can get it again and so preventive measures are still necessary.
Persons who have been in a malaria risk area, either during daytime or nighttime hours, are not allowed to donate blood in the United States for a period of time after returning from the malarious area. Persons who are residents of nonmalarious countries are not allowed to donate blood for 1 year after they have returned from a malarious area. Persons who are residents of malarious countries are not allowed to donate blood for 3 years after leaving a malarious area. Persons who have had malaria are not allowed to donate blood for 3 years after treatment for malaria.
Malaria is characterized by fever and influenza-like symptoms, including chills, headache, myalgias, and malaise; these symptoms can occur at intervals. Malaria may be associated with anemia and jaundice, and P. falciparum infections can cause seizures, mental confusion, kidney failure, coma, and death. Malaria symptoms can develop as early as 7 days after initial exposure in a malaria-endemic area and as late as several months after departure from a malarious area, after chemoprophylaxis has been terminated.
No vaccine is currently available. Taking an appropriate drug regimen and using anti-mosquito measures will help prevent malaria. Travelers should be informed that no method can protect completely against the risk for contracting malaria.
Personal Protection Measures
Because of the nocturnal feeding habits of Anopheles mosquitoes, malaria transmission occurs primarily between dusk and dawn. Travelers should be advised to take protective measures to reduce contact with mosquitoes, especially during these hours. Such measures include remaining in well-screened areas, using mosquito bed nets (preferably insecticide-treated nets), and wearing clothes that cover most of the body. Additionally, travelers should be advised to purchase insect repellent for use on exposed skin. The most effective repellent against a wide range of vectors is DEET (N, N-diethylmetatoluamide), an ingredient in many commercially available insect repellents. The actual concentration of DEET varies widely among repellents. DEET formulations as high as 50% are recommended for both adults and children >2 months of age (See Protection against Mosquitoes and Other Arthropod Vectors).
Travelers not staying in well-screened or air-conditioned rooms should be advised to use a pyrethroid-containing flying-insect spray in living and sleeping areas during evening and nighttime hours. They should take additional precautions, including sleeping under bed nets (preferably insecticide-treated bed nets). In the United States, permethrin (Permanone) is available as a liquid or spray. Overseas, either permethrin or another insecticide, deltamethrin, is available and may be sprayed on bed nets and clothing for additional protection against mosquitoes. Bed nets are more effective if they are treated with permethrin or deltamethrin insecticide; bed nets may be purchased that have already been treated with insecticide. Information about ordering insecticide-treated bed nets is available at http://www.travmed.com, telephone 1-800-872-8633, fax 413-584-6656; or http://www.travelhealthhelp.com, telephone 1-888-621-3952.
Chemoprophylaxis is the strategy that uses medications before, during, and after the exposure period to prevent the disease caused by malaria parasites. The aim of prophylaxis is to prevent or suppress symptoms caused by blood-stage parasites. In addition, presumptive anti-relapse therapy (also known as terminal prophylaxis) uses medications towards the end of the exposure period (or immediately thereafter) to prevent relapses or delayed-onset clinical presentations of malaria caused by hypnozoites (dormant liver stages) of P. vivax or P. ovale .
In choosing an appropriate chemoprophylactic regimen before travel, the traveler and the health-care provider should consider several factors. The travel itinerary should be reviewed in detail and compared with the information on areas of risk in a given country to determine whether the traveler will actually be at risk for acquiring malaria. Whether the traveler will be at risk for acquiring drug-resistant P. falciparum malaria should also be determined. Resistance to antimalarial drugs has developed in many regions of the world. Health-care providers should consult the latest information on resistance patterns before prescribing prophylaxis for their patients. (See section "Malaria Hotline" below for details about accessing this information from CDC.)
The resistance of P. falciparum to chloroquine has been confirmed in all areas with P. falciparum malaria except the Dominican Republic, Haiti, Central America west of the Panama Canal, Egypt, and some countries in the Middle East. In addition, resistance to sulfadoxine-pyrimethamine (e.g., Fansidar) is widespread in the Amazon River Basin area of South America, much of Southeast Asia, other parts of Asia, and, increasingly, in large parts of Africa. Resistance to mefloquine has been confirmed on the borders of Thailand with Burma (Myanmar) and Cambodia, in the western provinces of Cambodia, and in the eastern states of Burma (Myanmar).
Malaria chemoprophylaxis with mefloquine or chloroquine should begin 1-2 weeks before travel to malarious areas; prophylaxis with doxycycline, atovaquone/proguanil, or primaquine can begin 1-2 days before travel. Beginning the drug before travel allows the antimalarial agent to be in the blood before the traveler is exposed to malaria parasites. Chemoprophylaxis can be started earlier if there are particular concerns about tolerating one of the medications. Starting the medication 3-4 weeks in advance allows potential adverse events to occur before travel. If unacceptable side effects develop, there would be time to change the medication before the traveler's departure.
The drugs used for antimalarial chemoprophylaxis are generally well tolerated. However, side effects can occur. Minor side effects usually do not require stopping the drug. Travelers who have serious side effects should see a health-care provider. See the section below on "Adverse Reactions and Contraindications" for more detail on safety and tolerability of the drugs used for malaria prevention. The health-care provider should establish whether the traveler has previously experienced an allergic or other reaction to one of the antimalarial drugs of choice. In addition, the health-care provider should determine whether medical care will be readily accessible during travel should the traveler develop intolerance to the drug being used and need to change to a different agent.
General Recommendations for Prophylaxis
Chemoprophylaxis should continue during travel in the malarious areas and after leaving the malarious areas (4 weeks after travel for chloroquine, mefloquine, and doxycycline, and 7 days after travel for atovaquone/proguanil and primaquine). In comparison with drugs with short half-lives, which are taken daily, drugs with longer half-lives, which are taken weekly, offer the advantage of a wider margin of error if the traveler is late with a dose. For example, if a traveler is 1-2 days late with a weekly drug, prophylactic blood levels can remain adequate; if the traveler is 1-2 days late with a daily drug, protective blood levels are less likely to be maintained.
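To make the timing rules above concrete, here is a minimal sketch in Python (hypothetical helper code, not a CDC tool, and not medical advice) that turns them into a dosing calendar. It collapses the "1-2 weeks" and "1-2 days" lead times to their shorter end; drug selection, dosing, and contraindications remain matters for a health-care provider.

```python
from datetime import date, timedelta

# Lead times and post-travel periods follow the text above: weekly drugs
# (chloroquine, mefloquine) start about 1 week before travel and continue
# 4 weeks after; daily doxycycline starts 1 day before and continues 4 weeks
# after; daily atovaquone/proguanil and primaquine start 1 day before and
# continue 7 days after. (Illustrative assumption: the shorter lead time.)
REGIMENS = {
    "chloroquine":          {"interval_days": 7, "lead_days": 7, "tail_days": 28},
    "mefloquine":           {"interval_days": 7, "lead_days": 7, "tail_days": 28},
    "doxycycline":          {"interval_days": 1, "lead_days": 1, "tail_days": 28},
    "atovaquone/proguanil": {"interval_days": 1, "lead_days": 1, "tail_days": 7},
    "primaquine":           {"interval_days": 1, "lead_days": 1, "tail_days": 7},
}

def dosing_dates(drug: str, arrive: date, depart: date) -> list:
    """List the dates on which a dose would be taken for the chosen regimen."""
    r = REGIMENS[drug]
    day = arrive - timedelta(days=r["lead_days"])
    end = depart + timedelta(days=r["tail_days"])
    dates = []
    while day <= end:
        dates.append(day)
        day += timedelta(days=r["interval_days"])
    return dates

# Example: a two-week trip, comparing a weekly drug with a daily drug.
arrive, depart = date(2005, 6, 1), date(2005, 6, 14)
print(len(dosing_dates("mefloquine", arrive, depart)), "weekly doses")
print(len(dosing_dates("atovaquone/proguanil", arrive, depart)), "daily doses")
```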
Transmission and Symptoms
Malaria is a serious disease that is transmitted to humans by the bite of an infected female Anopheles mosquito. Symptoms may include fever and flu-like illness, including chills, headache, muscle aches, and fatigue. Malaria may cause anemia and jaundice. Plasmodium falciparum infections, if not immediately treated, may cause kidney failure, coma, and death.
Malaria can often be prevented by using antimalarial drugs and by using personal protection measures to prevent mosquito bites. However, in spite of all protective measures, travelers may still develop malaria.
Malaria symptoms will occur at least 7 to 9 days after being bitten by an infected mosquito. Fever in the first week of travel in a malaria-risk area is unlikely to be malaria; however, any fever should be promptly evaluated.
Malaria is always a serious disease and may be a deadly illness. If you become ill with a fever or flu-like illness either while traveling in a malaria-risk area or after you return home (for up to 1 year), you should seek immediate medical attention and should tell the physician your travel history.
Malaria Risk by Country
- Burundi: All areas.
- Comoros: All areas.
- Djibouti: All areas.
- Eritrea: All areas at altitudes lower than 2,200 meters (7,218 feet). No risk in Asmara.
- Ethiopia: All areas at altitudes lower than 2,000 meters (6,561 feet). No risk in Addis Ababa.
- Kenya: All areas (including game parks) at altitudes lower than 2,500 meters (8,202 feet). No risk in Nairobi.
- Madagascar: All areas.
- Malawi: All areas.
- Mauritius: Rural areas only. No risk on Rodrigues Island.
- Mayotte (French territorial collectivity): All areas.
- Mozambique: All areas.
- Réunion (France): No risk.
- Rwanda: All areas.
- Seychelles: No risk.
- Somalia: All areas.
- Tanzania: All areas at altitudes lower than 1,800 meters (5,906 feet).
- Uganda: All areas.
All travelers to malaria-risk areas in East Africa , except travelers to Mauritius* , including infants, children, and former residents of East Africa should take one of the following antimalarial drugs (listed alphabetically):
- atovaquone/proguanil
- doxycycline
- mefloquine
- primaquine (in special circumstances).
*Travelers to the malaria risk areas of Mauritius should take chloroquine to prevent malaria (see below). All travelers to malaria-risk areas in Mauritius, including infants, children, and former residents of Mauritius, should take chloroquine as their antimalarial drug.
NOTE: Chloroquine is NOT an effective antimalarial drug for the other countries in East Africa and should not be taken to prevent malaria in these countries.
Most antimalarial drugs are well-tolerated; most travelers do not need to stop taking their drug because of side effects. However, if you are particularly concerned about side effects, discuss the possibility of starting your drug early (3-4 weeks in advance of your trip) with your health care provider. If you cannot tolerate the drug, ask your doctor to change your medication.
Atovaquone/proguanil (brand name: Malarone™)
Atovaquone/proguanil is a fixed combination of two drugs, atovaquone and proguanil. In the United States, it is available as the brand name, Malarone.
Directions for Use
The adult dosage is 1 adult tablet (250mg atovaquone/100mg proguanil) once a day.
- Take the first dose of atovaquone/proguanil 1 to 2 days before travel to the malaria-risk area.
- Take atovaquone/proguanil once a day during travel in the malaria-risk area.
- Take atovaquone/proguanil once a day for 7 days after leaving the malaria-risk area.
- Take the dose at the same time each day with food or milk.
Atovaquone/proguanil Side Effects and Warnings
The most common side effects reported by travelers taking atovaquone/proguanil are abdominal pain, nausea, vomiting, and headache. Most travelers taking atovaquone/proguanil do not have side effects serious enough to stop taking the drug. Other antimalarial drugs are available if you cannot tolerate atovaquone/proguanil; see your health care provider.
The following travelers should NOT take atovaquone/proguanil for prophylaxis (other antimalarial drugs are available; see your health care provider):
- children weighing less than 11 kilograms (25 pounds);
- pregnant women;
- women breast-feeding infants weighing less than 11 kilograms (25 pounds);
- patients with severe renal impairment;
- patients allergic to atovaquone or proguanil.
Doxycycline (many brand names and generics are available)
Doxycycline is related to the antibiotic tetracycline.
Directions for Use
The adult dosage is 100 mg once a day.
- Take the first dose of doxycycline 1 or 2 days before arrival in the malaria-risk area.
- Take doxycycline once a day, at the same time each day, while in the malaria-risk area.
- Take doxycycline once a day for 4 weeks after leaving the malaria-risk area.
Doxycycline Side Effects and Warnings
The most common side effects reported by travelers taking doxycycline include sun sensitivity (sunburning faster than normal). To prevent sunburn, avoid midday sun, wear a high SPF sunblock, wear long-sleeved shirts, long pants, and a hat. Doxycycline may cause nausea and stomach pain. Always take the drug on a full stomach with a full glass of liquid. Do not lie down for 1 hour after taking the drug to prevent reflux of the drug (backing up into the esophagus). Women who use doxycycline may develop a vaginal yeast infection. You may either take an over-the-counter yeast medication or have a prescription pill from your health care provider for use if vaginal itching or discharge develops. Most travelers taking doxycycline do not have side effects serious enough to stop taking the drug. (Other antimalarial drugs are available if you cannot tolerate doxycycline; see your health care provider.)
The following travelers should NOT take doxycycline (other antimalarial drugs are available; see your health care provider):
- pregnant women;
- children under the age of 8 years;
- persons allergic to doxycycline or other tetracyclines.
Mefloquine (brand name: Lariam ™ and generic)
Directions for Use
The adult dosage is 250 mg salt (one tablet) once a week.
- Take the first dose of mefloquine 1 week before arrival in the malaria-risk area.
- Take mefloquine once a week, on the same day each week, while in the malaria-risk area.
- Take mefloquine once a week for 4 weeks after leaving the malaria-risk area.
- Mefloquine should be taken on a full stomach, for example, after a meal.
Personal Lariam Warning
I have had falciparum malaria and, due to a lack of alternatives, used Lariam for treatment. Of course the dosage for treatment is much higher and therefore the side effects are much more likely to appear. I must say, I suffered nearly every side effect mentioned on the leaflet, and it took me three months to recover completely from some of them. Dizziness, visual disturbance, headache and psychological effects were so strong that I lost orientation, balance and sense of relativity for many weeks. I was barely able to walk and generally felt more like a "vegetable" than a human being. Therefore, I can only advise everybody to reconsider the use of Lariam.
Mefloquine Side Effects and Warnings
The most common side effects reported by travelers taking mefloquine include headache, nausea, dizziness, difficulty sleeping, anxiety, vivid dreams, and visual disturbances.
Mefloquine has rarely been reported to cause serious side effects, such as seizures, depression, and psychosis. These serious side effects are more frequent with the higher doses used to treat malaria; fewer occurred at the weekly doses used to prevent malaria. Most travelers taking mefloquine do not have side effects serious enough to stop taking the drug. (Other antimalarial drugs are available if you cannot tolerate mefloquine; see your health care provider.)
Some travelers should NOT take mefloquine (other antimalarial drugs are available; see your health care provider):
- persons with active depression or a recent history of depression;
- persons with a history of psychosis, generalized anxiety disorder, schizophrenia, or other major psychiatric disorder;
- persons with a history of seizures (does not include the typical seizure caused by high fever in childhood);
- persons allergic to mefloquine.
Primaquine (primary prophylaxis)
In certain circumstances, when other antimalarial drugs cannot be used and in consultation with malaria experts , primaquine may be used to prevent malaria while the traveler is in the malaria-risk area (primary prophylaxis).
Directions for Use
Note: Travelers must be tested for G6PD deficiency (glucose-6-phosphate dehydrogenase) and have a documented G6PD level in the normal range before primaquine use. Primaquine can cause fatal hemolysis (bursting of the red blood cells) in G6PD-deficient persons.
The adult dosage is 52.6 mg salt (30 mg base primaquine) once a day.
- Take the first dose of primaquine 1-2 days before travel to the malaria-risk area.
- Take primaquine once a day, at the same time each day, while in the malaria-risk area.
- Take primaquine once a day for 7 days after leaving the malaria-risk area.
Primaquine Side Effects
The most common side effects reported by travelers taking primaquine include abdominal cramps, nausea, and vomiting.
Some travelers should not take primaquine (other antimalarial drugs are available; see your health care provider):
- persons with G6PD deficiency;
- pregnant women (the fetus may be G6PD deficient, even if the mother is in the normal range);
- women breast-feeding infants unless the infant has a documented normal G6PD level;
- persons allergic to primaquine.
Chloroquine (brand name Aralen™ and generics)
Directions for Use
- The adult dosage is 500 mg (salt) chloroquine phosphate.
- Take the first dose of chloroquine 1 week before arrival in the malaria-risk area.
- Take chloroquine once a week while in the malaria-risk area.
- Take chloroquine once a week for 4 weeks after leaving the malaria-risk area.
- Chloroquine should be taken on a full stomach to lessen nausea.
Chloroquine Side Effects
The most common side effects reported by travelers taking chloroquine include nausea and vomiting, headache, dizziness, blurred vision, and itching. Chloroquine may worsen the symptoms of psoriasis. Most travelers taking chloroquine do not have side effects serious enough to stop taking the drug. Other antimalarial drugs are available if you cannot tolerate chloroquine; see your health care provider.
Antimalarial Drugs Purchased Overseas
You should purchase your antimalarial drugs before travel. Drugs purchased overseas may not be manufactured according to United States standards and may not be effective.
They also may be dangerous, contain counterfeit medications or contaminants, or be combinations of drugs that are not safe to use. Halofantrine (marketed as Halfan) is widely used overseas to treat malaria. CDC recommends that you do NOT use halofantrine because of serious heart-related side effects, including deaths. You should avoid using antimalarial drugs that are not recommended unless you have been diagnosed with life-threatening malaria and no other options are immediately available.
Protect Yourself from Mosquito Bites
Malaria is transmitted by the bite of an infected mosquito; these mosquitoes usually bite between dusk and dawn. To avoid being bitten, remain indoors in a screened or air-conditioned area during the peak biting period. If out-of-doors, wear long-sleeved shirts, long pants, and hats. Apply insect repellent (bug spray) to exposed skin.
Choosing an Insect Repellent
For the prevention of malaria, CDC recommends an insect repellent with DEET (N, N-diethyl-m-toluamide) as the repellent of choice. Many DEET products give long-lasting protection against the mosquitoes that transmit malaria (the anopheline mosquitoes).
A new repellent is now available in the United States that contains 7% picaridin (KBR 3023). Picaridin may be used if a DEET-containing repellent is not acceptable to the user. However, there is much less information available on how effective picaridin is at protecting against all of the types of mosquitoes that transmit malaria. Also, since the percent of picaridin is low, this repellent may only protect against bites for 1-4 hours.
At this time, use of other repellents is not recommended for the prevention of malaria because there is insufficient data on how well they protect against the mosquitoes that transmit malaria.
Precautions When Using Any Repellent
- Read and follow the directions and precautions on the product label.
- Use only when outdoors and thoroughly wash off the repellent from the skin with soap and water after coming indoors.
- Do not breathe in, swallow, or get repellent into the eyes or mouth. If using a spray product, apply to your face by spraying your hands and rubbing the product carefully over the face, avoiding eyes and mouth.
- Never use repellents on wounds or broken skin.
- Pregnant women should use insect repellent as recommended for other adults. Wash off with soap and water after coming indoors.
- Repellents may be used on infants older than 2 months of age.
- Children under 10 years old should not apply insect repellent themselves. Do not apply to young children's hands or around their eyes and mouth.
Using Repellents With DEET
- Do not get repellent containing DEET into the mouth. DEET is toxic if swallowed.
- Higher concentrations of DEET may have a longer repellent effect; however, concentrations over 50% provide no added protection.
- Timed-release DEET products, which are micro-encapsulated, may have a longer repellent effect than liquid DEET products. Re-apply as necessary, following the label directions.
Using Repellents With Picaridin
- Spray enough picaridin repellent to slightly moisten skin.
- Reapply repellents with picaridin (7% picaridin is the only product currently available in the United States) every 3 to 4 hours. Do not apply more than 3 times a day.
- Picaridin repellent causes moderate eye irritation. Avoid contact with eyes. If in eyes, wash with water for 15 to 20 minutes.
Other Recommended Anti-mosquito Measures
- Travelers should take a flying insect spray on their trip to help clear rooms of mosquitoes. The product should contain a pyrethroid insecticide; these insecticides quickly kill flying insects, including mosquitoes.
- Travelers not staying in well-screened or air-conditioned rooms should sleep under bed nets (mosquito nets), preferably nets treated with the insecticide permethrin. Permethrin both repels and kills mosquitoes as well as other biting insects and ticks. In the United States, permethrin is available as a spray or a liquid (e.g., Permanone™). Pre-treated nets, as well as permethrin and another insecticide, deltamethrin, are available overseas.
For information on ordering insecticide-treated bed nets: www.travmed.com , phone 1-800-872-8633, fax: 413-584-6656; or www.travelhealthhelp.com , phone 1-866-621-6260.
- Protect infants (especially infants under 2 months of age not wearing insect repellent) by using a carrier draped with mosquito netting with an elastic edge for a tight fit.
- Clothing, shoes, and camping gear can also be treated with permethrin. Treated clothing can be repeatedly washed and still repel insects. Some commercial clothing products pretreated with permethrin are now available in the United States.
- Monica Parise, Ann Barber, and Sonja Mali | http://bushdrums.com/index.php/travelreports/item/3228-malaria | 13 |
On this day in 1775, Congress issues $2 million in bills of credit.
By the spring of 1775, colonial leaders, concerned by British martial law in Boston and increasing constraints on trade, had led their forces in battle against the crown. But, the American revolutionaries encountered a small problem on their way to the front: they lacked the funds necessary to wage a prolonged war.
Though hardly the colonies' first dalliance with paper notes--the Massachusetts Bay colony had issued its own bills in 1690--the large-scale distribution of the revolutionary currency was fairly new ground for America. Moreover, the bills, known at the time as "Continentals," notably lacked the then de rigueur rendering of the British king. Instead, some of the notes featured likenesses of Revolutionary soldiers and the inscription "The United Colonies." But, whatever their novelty, the Continentals proved to be a poor economic instrument: backed by nothing more than the promise of "future tax revenues" and prone to rampant inflation, the notes ultimately had little fiscal value. As George Washington noted at the time, "A wagonload of currency will hardly purchase a wagonload of provisions." Thus, the Continental failed and left the young nation saddled with a hefty war debt.
A deep economic depression followed the Treaty of Paris in 1783. Unstable currency and unstable debts caused a Continental Army veteran, Daniel Shays, to lead a rebellion in western Massachusetts during the winter of 1787. Fear of economic chaos played a significant role in the decision to abandon the Articles of Confederation for the more powerful, centralized government created by the federal Constitution. During George Washington's presidency, Alexander Hamilton struggled to create financial institutions capable of stabilizing the new nation's economy.
Duly frustrated by the experience with Continental currency, America resisted the urge to again issue new paper notes until the dawn of the Civil War. | http://www.history.com/this-day-in-history/congress-issues-continental-currency | 13 |
Potential Impacts of Sea Level Rise on Mangroves
Overview of Mangrove Ecosystem
Climatic factors such as temperature and moisture affect mangrove distribution. Mangroves are distributed latitudinally within the tropics and subtropics, reaching their maximum development between 25°N and 25°S. Temperature controls latitudinal distributions of mangrove; perennial mangroves generally cannot survive freezing temperatures. The richest mangrove communities occur in areas where the water temperature is greater than 24ºC in the warmest month. The most recent estimates suggest that mangroves presently occupy about 14,653,000 ha of tropical and subtropical coastline (McLeod & Salm, 2006) (Fig.1).
The cumulative effects of natural and anthropogenic pressures make mangrove wetlands one of the most threatened natural communities worldwide. Roughly 50% of the global area has been lost since 1900 and 35% of the global area has been lost in the past two decades, due primarily to human activities such as conversion for aquaculture. Mangroves are declining in area worldwide. The global average annual rate of mangrove loss is about 2.1%, exceeding the rate of loss of tropical rainforests (0.8%) (Gilman et al., 2006a).
Mangrove Ecosystem Values
The mangrove ecosystem provides income from the collection of the mollusks, crustaceans, and fish that live there. Mangroves are harvested for fuelwood, charcoal, timber, and wood chips. Services include the role of mangroves as nurseries for economically important fisheries, especially for shrimp. Mangroves also provide habitats for a large number of molluscs, crustaceans, birds, insects, monkeys, and reptiles. Other mangrove services include the filtering and trapping of pollutants and the stabilization of coastal land by trapping sediment and protection against storm damage. Also, mangroves provide recreational, tourism, educational, and research opportunities, such as boardwalks and boat tours, and are important for research and education.
Benefits as Measured by Market Prices
The annual economic values of mangroves, estimated by the cost of the products and services they provide, have been estimated to be USD 200,000 - 900,000 per hectare (Gilman et al., 2006a). However, the location and values of the beneficiaries can result in substantial variation in mangrove economic value. For instance, mangroves fronting a highly developed coastline or located near major tourist destinations may have a higher economic value than mangroves in less developed areas with little or no tourism sector development (Gilman et al., 2006a).
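As a rough, back-of-the-envelope illustration (the arithmetic below simply combines two figures quoted in this article and makes no claim about their precision), the global area estimate and the per-hectare value range imply an order-of-magnitude global figure:

```python
# Illustrative only: multiply the quoted global mangrove area (~14,653,000 ha)
# by the quoted annual value range (USD 200,000-900,000 per hectare).
area_ha = 14_653_000
value_low, value_high = 200_000, 900_000
print(f"~USD {area_ha * value_low:.1e} to {area_ha * value_high:.1e} per year")
# prints: ~USD 2.9e+12 to 1.3e+13 per year
```

The very large result is a reminder of the article's own caveat that the location and values of the beneficiaries can cause substantial variation in estimated mangrove economic value.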
Potential Impacts of Sea Level Rise
Global and pacific Projections for Sea Level Rise
Global mean sea level is projected to rise by 0.09 to 0.88 m between 1990 and 2100 based on the Intergovernmental Panel on Climate Change’s full range of 35 climate projection scenarios. The projected short-term sea level rise from 1990 to 2100 is due primarily to thermal expansion of seawater and transfer of ice from glaciers and ice caps to water in the oceans, which both change the volume of water in the world oceans (Fig. 3).
The level of the sea at the shoreline is determined by many factors in the global environment that operate on a great range of time scales, from hours (tidal) to millions of years (ocean basin changes due to tectonics and sedimentation). On the time scale of decades to centuries, some of the largest influences on the average levels of the sea are linked to climate and climate change processes (Fig.4).
Mangrove Responses to Changing Sea Level
Sea-level rise is the greatest climate change challenge that mangrove ecosystems will face (McLeod & Salm, 2006). Mangroves can adapt to sea-level rise if it occurs slowly enough, if adequate expansion space exists, and if other environmental conditions are met.
There are three general scenarios for mangrove response to relative sea level rise, given a landscape-level scale and time period of decades or longer (Fig. 5).
No change in relative sea level: When sea level is not changing relative to the mangrove surface, mangrove elevation; salinity; frequency, period, and depth of inundation; and other factors that determine if a mangrove community can persist at a location will remain relatively constant, and the mangrove margins will remain in the same location (Fig. 5A) (Gilman et al., 2006a).
Relative sea level lowering: When sea level is dropping relative to the mangrove surface, this forces the mangrove seaward and landward boundaries to migrate seaward (Fig. 5B) and depending on the topography, the mangrove may also expand laterally; and
Relative sea level rising: If sea level is rising relative to the mangrove surface, the mangrove’s seaward and landward margins retreat landward, where unobstructed, as mangrove species zones migrate inland in order to maintain their preferred environmental conditions, such as period, frequency and depth of inundation and salinity (Fig. 5C). Depending on the ability of individual true mangrove species to colonize new habitat at a rate that keeps pace with the rate of relative sea level rise, the slope of adjacent land, and the presence of obstacles to landward migration of the landward boundary of the mangrove, such as seawalls and other shoreline protection structures, some sites will revert to a narrow mangrove fringe or experience extirpation of the mangrove community (Gilman et al., 2006a) (Fig. 5D). The sediment composition of the upland habitat where the mangrove is migrating may also influence the migration rate (Gilman et al., 2007).
Review of the stratigraphic record of mangrove ecosystems during sea-level changes of the Holocene shows that low islands such as Grand Cayman, Bermuda, and Tongatapu will be particularly vulnerable to the loss of mangrove ecosystems during the rises of relative sea level projected for the next 50 years. Mangrove ecosystems in these locations could keep up with a sea-level rise of up to 8-9 cm/100 years, but at rates of over 12 cm/100 years could not persist. This is due to low rates of sediment accumulation, with limited sources from outside the mangrove zone, such as rivers or soil erosion. Other factors contributing to mangrove persistence are the primary production rate of forests, shoreline erosion due to deeper and more turbulent water, and the frequency and intensity of tropical storms.
On high islands such as Viti Levu and Lakeba in Fiji, and Kosrae in the Caroline Islands, sediment supply has been accelerated by anthropogenically enhanced rates of soil erosion, so that the dominant process affecting mangroves of high islands and continental coasts may be the input of terrestrial sediment, which lessens the effects of sea level rise. Because of the allochthonous component in these sediments, mangrove substrates are accreting faster than the peats of low limestone islands, at rates of up to 25 cm/100 years.
The nature of the problems produced by sea-level rise varies between and within regions due to a range of natural, socioeconomic, institutional and cultural factors. It is important to emphasize that there are no winners under sea-level rise; rather, there are small losers and big losers. The Pacific small islands appear to be highly vulnerable to sea-level rise, while Europe is less vulnerable than the other regions (Nicholls & Mimura, 1998).
To assess mangrove vulnerability to sea level rise and other climate change effects and to plan for adaptation, island countries and territories need to build their technical and institutional capacity to:
(1) Determine trends in relative mean sea level and trends in the frequency and elevations of extreme high water events, and incorporate this information into land-use planning processes.
(2) Measure trends in the change in mangrove surface elevation to determine how sea level is changing relative to the mangrove surface.
(3) Acquire and analyze historical remotely sensed imagery to observe historical trends in changes in position of mangrove margins.
(4) Produce topographic maps and maps of locations of development and roads for land parcels adjacent to and containing mangroves, and establish or augment GIS programs. The World Bank-funded Infrastructure Asset Management Project in progress in Samoa might serve as a suitable model.
(5) Develop standardized mangrove monitoring programs as part of a regional mangrove-monitoring network. Provide training opportunities for in-country personnel to manage the mangrove-monitoring program, coordinate with a regional hub, and conduct monitoring techniques. Monitoring methods would include periodic delineation of mangrove margins.
(6) Assess efficacy of mangrove management frameworks and provide assistance to manage coastal activities to prevent unsustainable effects on mangroves and other coastal habitats, in part, to increase resilience to climate change effects, and plan for any landward mangrove migration in response to relative sea level rise.
(7) Augment regional capacity to rehabilitate mangroves.

Establishing a regional mangrove monitoring network may enable many of the identified capacity building priorities to be fulfilled, and should be one of the highest regional priorities. Participating countries and territories could share technical and financial resources to maximize monitoring and conservation benefits through economies of scale. Assessing the efficacy of management frameworks to avoid and minimize adverse effects on mangroves and other valuable coastal ecosystems, and to plan for any landward mangrove migration, is also critical. Ensuring that management frameworks are capable of eliminating and minimizing stresses that degrade mangroves is necessary to provide for mangrove resilience to anticipated stresses from sea level and other climate change effects. Managers will also need the institutional capacity to plan for site-specific mangrove response to climate change effects, such as instituting setbacks from mangroves for new development along appropriate sections of coastline. However, management frameworks will only be effective if local communities and management authorities recognize the value of mangrove conservation. It is therefore also a priority to continually develop and augment a mangrove conservation ethic.
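As a minimal illustration of the monitoring arithmetic behind items (1) and (2) in the list above, sea-level change relative to a mangrove surface can be estimated by subtracting the measured change in surface elevation from the local sea-level trend. The sketch below uses hypothetical example numbers and is not drawn from the cited monitoring programs.

```python
# Hedged sketch: combining a tide-gauge trend with surface-elevation
# measurements to gauge sea-level change relative to the mangrove surface.
def relative_rise_mm_per_yr(sea_level_trend, surface_elevation_change):
    """Both arguments in mm/yr; a positive result means the sea is rising
    relative to the mangrove surface (an elevation deficit)."""
    return sea_level_trend - surface_elevation_change

# Hypothetical example values, not measurements from any particular site
tide_gauge_trend_mm_yr = 2.0   # local relative sea-level rise
surface_change_mm_yr = 1.2     # net gain in mangrove surface elevation
deficit = relative_rise_mm_per_yr(tide_gauge_trend_mm_yr, surface_change_mm_yr)
print(f"Sea level is rising {deficit:.1f} mm/yr relative to the mangrove surface")
```

A sustained positive deficit of this kind is what would push a site toward the landward-migration scenario shown in Fig. 5C.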
The value of wetlands conservation is often underestimated, especially in less developed countries with high population growth and substantial development pressure, where short-term economic gains that result from activities that adversely affect wetlands are often preferred over the less-tangible long-term benefits that accrue from sustainably using wetlands.
Local communities and leaders must recognize the long-term benefits of mangrove conservation to reverse historical trends in loss of mangrove area, maximize mangrove resilience to climate change, and, where sea level is projected to rise relative to mangrove surfaces, enable unobstructed natural landward migration wherever possible. Education and outreach programs are an investment to bring about changes in behavior and attitudes by better informing communities of the value of mangroves and other ecosystems. This increase in public knowledge of the importance of mangroves can provide the local community with information to make informed decisions about the use of their mangrove resources, and can result in grassroots support and increased political will for measures to conserve and sustainably manage mangroves (Gilman et al., 2006b).
- McLeod, E. and R.V. Salm. 2006. Managing Mangroves for Resilience to Climate Change. IUCN, Gland, Switzerland. 64 p.
- Gilman, E., Van Lavieren, H., Ellison, J., Jungblut, V., Wilson, L., Areki, F., Brighouse, G., Bungitak, J., Dus, E., Henry, M., Sauni, I. Jr., Kilman, M., Matthews, E., Teariki-Ruatu, N., Tukia, S. and K. Yuknavage. 2006a. Pacific Island Mangroves in a Changing Climate and Rising Sea. UNEP Regional Seas Reports and Studies No. 179. United Nations Environment Programme, Regional Seas Programme, Nairobi, Kenya.
- Bibliography of sea-level change and mangroves. Compiled by Andrea Schwarzbach.
- Gilman, E. (ed.) 2006. Proceedings of the Symposium on Mangrove Responses to Relative Sea-Level Rise and Other Climate Change Effects, 13 July 2006, Catchments to Coast, The Society of Wetland Scientists 27th International Conference, 9-14 July 2006, Cairns Convention Centre, Cairns, Australia. Western Pacific Regional Fishery Management Council, Honolulu, USA.
- Gilman, E.H., Ellison, J. and Coleman, R. 2007. Assessment of Mangrove Response to Projected Relative Sea-Level Rise and Recent Historical Reconstruction of Shoreline Position. Environmental Monitoring and Assessment 124(1-3): 105-130.
- Nicholls, R.J. and N. Mimura. 1998. Regional issues raised by sea-level rise and their policy implications. Climate Research 11: 5-18.
- Gilman, E.H., Ellison, J., Jungblut, V., Van Lavieren, H., Wilson, L., Areki, F., Brighouse, G., Bungitak, J., Dus, E., Henry, M., Kilman, M., Matthews, E., Sauni, I. Jr., Teariki-Ruatu, N., Tukia, S. and K. Yuknavage. 2006b. Adapting to Pacific Island mangrove responses to sea level rise and climate change. Climate Research 32(3): 161-176.
| http://www.vliz.be/wiki/Potential_Impacts_of_Sea_Level_Rise_on_Mangroves | 13
25 | Slavery in the United States began soon after English colonists first settled Virginia in 1607 and lasted until the passage of the Thirteenth Amendment to the United States Constitution in 1865. Before the widespread establishment of chattel slavery, much labor was organized under a system of bonded labor known as indentured servitude. This typically lasted for several years for white and black alike, and it was a means of using labor to pay the costs of transporting people to the colonies. By the 18th century, court rulings established the racial basis of the American incarnation of slavery to apply chiefly to Black Africans and people of African descent, and occasionally to Native Americans. In part because of the Southern colonies' devotion of resources to tobacco culture, which was labor intensive, by the end of the 17th century they had a higher number and proportion of slaves than in the north.
From 1654 until 1865, slavery for life was legal within the boundaries of the present United States. Most slaves were black and were held by whites, although some Native Americans and free blacks also held slaves. The majority of slaveholding was in the southern United States where most slaves were engaged in an efficient machine-like gang system of agriculture. According to the 1860 U.S. census, nearly four million slaves were held in a total population of just over 12 million in the 15 states in which slavery was legal. Of all 1,515,605 families in the 15 slave states, 393,967 held slaves (roughly one in four), amounting to 8% of all American families. Most slaveholding households, however, had only a few slaves. The majority of slaves was held by planters, defined by historians as those who held 20 or more slaves. The planters achieved wealth and social and political power. Ninety-five percent of black people lived in the South, comprising one-third of the population there, as opposed to 2% of the population of the North.
The wealth of the United States in the first half of the 19th century was greatly enhanced by the labor of African Americans. But with the Union victory in the American Civil War, the slave-labor system was abolished in the South. This contributed to the decline of the postbellum Southern economy, though the South also faced significant new competition from foreign cotton producers such as India and Egypt, and the cotton gin had made cotton production less labor-intensive in any case. Northern industry, which had expanded rapidly before and during the war, surged even further ahead of the South's agricultural economy. Industrialists from northeastern states came to dominate many aspects of the nation's life, including social and some aspects of political affairs. The planter class of the South lost power temporarily. The rapid economic development following the Civil War accelerated the development of the modern U.S. industrial economy.
Twelve million black Africans were shipped to the Americas from the 16th to the 19th centuries. Of these, an estimated 645,000 (5.4% of the total) were brought to what is now the United States; the overwhelming majority were shipped to Brazil. The slave population in the United States had grown to four million by the 1860 Census.
In addition to African slaves, Europeans, mostly Irish, Scottish, English, and Germans, were brought over in substantial numbers as indentured servants, particularly in the British Thirteen Colonies. Over half of all white immigrants to the English colonies of North America during the 17th and 18th centuries consisted of indentured servants. The white citizens of Virginia, who had arrived from Britain, decided to treat the first Africans in Virginia as indentured servants. As with European indentured servants, the Africans were freed after a stated period and given the use of land and supplies by their former owners, and at least one African American, Anthony Johnson, eventually became a landowner on the Eastern Shore and a slave-owner. The major problem with indentured servants was that, in time, they would be freed, but they were unlikely to become prosperous. The best lands in the tidewater regions were already in the hands of wealthy plantation families by 1650, and the former servants became an underclass. Bacon's Rebellion showed that the poor laborers and farmers could prove a dangerous element to the wealthy landowners. By switching to pure chattel slavery, new white laborers and small farmers were mostly limited to those who could afford to immigrate and support themselves.
The transformation from indentured servitude to racial slavery happened gradually. There were no laws regarding slavery early in Virginia's history. However, by 1640, the Virginia courts had sentenced at least one black servant to slavery.
In 1654, John Casor, a black man, became the first legally recognized slave in the area that was to become the United States. A court in Northampton County ruled against Casor, declaring him property for life, "owned" by the black colonist Anthony Johnson. Since persons with African origins were not English citizens by birth, they were not necessarily covered by English Common Law.
The Virginia Slave codes of 1705 made clear the status of slaves. During the British colonial period, every colony had slavery. Those in the north were primarily house servants. Early on, slaves in the South worked on farms and plantations growing indigo, rice, and tobacco; cotton became a major crop after the 1790s. In South Carolina in 1720 about 65% of the population consisted of slaves. Slaves were used by rich farmers and plantation owners with commercial export operations. Backwoods subsistence farmers seldom owned slaves.
Some of the British colonies attempted to abolish the international slave trade, fearing that the importation of new Africans would be disruptive. Virginia bills to that effect were vetoed by the British Privy Council; Rhode Island forbade the import of slaves in 1774. All of the colonies except Georgia had banned or limited the African slave trade by 1786; Georgia did so in 1798, although some of these laws were later repealed.
The British West Africa Squadron's slave trade suppression activities were assisted by forces from the United States Navy, starting in 1820 with the USS Cyane. Initially this consisted of a few ships, but the relationship was eventually formalised by the Webster-Ashburton Treaty of 1842 into the Africa Squadron.
Although complete statistics are lacking, it is estimated that 1,000,000 slaves moved west from the Old South between 1790 and 1860. Most of the slaves were moved from Maryland, Virginia, and the Carolinas. Originally the points of destination were Kentucky and Tennessee, but after 1810 Georgia, Alabama, Mississippi, Louisiana and Texas received the most. In the 1830s, almost 300,000 were transported, with Alabama and Mississippi receiving 100,000 each. Every decade between 1810 and 1860 had at least 100,000 slaves moved from their state of origin. In the final decade before the Civil War, 250,000 were moved. Michael Tadman, in a 1989 book Speculators and Slaves: Masters, Traders, and Slaves in the Old South, indicates that 60-70% of interregional migrations were the result of the sale of slaves. In 1820 a child in the Upper South had a 30% chance to be sold south by 1860.
Slave traders were responsible for the majority of the slaves that moved west. Only a minority moved with their families and existing owner. Slave traders had little interest in purchasing or transporting intact slave families, although in the interest of creating a "self-reproducing labor force" equal numbers of men and women were transported. Berlin wrote, "The internal slave trade became the largest enterprise in the South outside the plantation itself, and probably the most advanced in its employment of modern transportation, finance, and publicity." The slave trade industry developed its own unique language with terms such as "prime hands, bucks, breeding wenches, and fancy girls" coming into common use. The expansion of the interstate slave trade contributed to the "economic revival of once depressed seaboard states" as demand accelerated the value of the slaves that were subject to sale.
Some traders moved their "chattels" by sea, with Norfolk to New Orleans being the most common route, but most slaves were forced to walk. Regular migration routes were established and were served by a network of slave pens, yards, and warehouses needed as temporary housing for the slaves. As the trek advanced, some slaves were sold and new ones purchased. Berlin concluded, "In all, the slave trade, with its hubs and regional centers, its spurs and circuits, reached into every cranny of southern society. Few southerners, black or white, were untouched."
The death rate for the slaves on their way to their new destination across the American South was much lower than that of the captives on their way across the Atlantic Ocean, but it was still higher than the normal death rate. Berlin summarizes the experience:
Once the trip was ended, slaves faced a life on the frontier significantly different from their experiences back east. Clearing trees and starting crops on virgin fields was harsh and backbreaking work. A combination of inadequate nutrition, bad water, and exhaustion from both the journey and the work weakened the newly arrived slaves and produced casualties. The preferred locations of the new plantations in river bottoms with mosquitoes and other environmental challenges threatened the survival of slaves, who had acquired only limited immunities in their previous homes. The death rate was such that, in the first few years of hewing a plantation out of the wilderness, some planters preferred whenever possible to use rented slaves rather than their own.
The harsh conditions on the frontier increased slave resistance and led to much more reliance on violence by the owners and overseers. Many of the slaves were new to cotton fields and unaccustomed to the "sunrise-to-sunset gang labor" required by their new life. Slaves were driven much harder than when they were involved in growing tobacco or wheat back east. Slaves also had less time and opportunity to boost the quality of their lifestyle by raising their own livestock or tending vegetable gardens, for either their own consumption or trade, as they could in the eastern south.
In Louisiana it was sugar, rather than cotton, that was the main crop. Between 1810 and 1830 the number of slaves increased from under 10,000 to over 42,000. New Orleans became nationally important as a slave port and by the 1840s had the largest slave market in the country. Dealing with sugar cane was even more physically demanding than growing cotton, and the preference was for young males, who represented two-thirds of the slave purchases. The largely young, unmarried male slave force made the reliance on violence by the owners “especially savage.”
Historian Kenneth M. Stampp describes the role of coercion in slavery, “Without the power to punish, which the state conferred upon the master, bondage could not have existed. By comparison, all other techniques of control were of secondary importance.” Stampp further notes that while rewards sometimes led slaves to perform adequately, most agreed with an Arkansas slaveholder, who wrote:
According to both the Pulitzer Prize-winning historian David Brion Davis and Marxist historian Eugene Genovese, treatment of slaves was both harsh and inhumane. Whether laboring or walking about in public, people living as slaves were regulated by legally authorized violence. Davis makes the point that, while some aspects of slavery took on a "welfare capitalist" look:
On large plantations, slave overseers were authorized to whip and brutalize non-compliant slaves. According to an account by a plantation overseer to a visitor, "some negroes are determined never to let a white man whip them and will resist you, when you attempt it; of course you must kill them in that case." Laws were passed that fined owners for not punishing recaptured runaway slaves. Slave codes authorized, indemnified or even required the use of violence, and were denounced by abolitionists for their brutality. Both slaves and free blacks were regulated by the Black Codes, and their movements were monitored by slave patrols conscripted from the white population, which were allowed to use summary punishment against escapees, sometimes maiming or killing them. In addition to physical abuse and murder, slaves were at constant risk of losing members of their families if their owners decided to trade them for profit, punishment, or to pay debts. A few slaves retaliated by murdering owners and overseers, burning barns, killing horses, or staging work slowdowns. Stampp, without contesting Genovese's assertions concerning the violence and sexual exploitation faced by slaves, does question the appropriateness of a Marxian approach in analyzing the owner-slave relationship.
Genovese claims that because the slaves were the legal property of their owners, it was not unusual for enslaved black women to be raped by their owners, members of their owner's families, or their owner's friends. Children who resulted from such rapes were slaves as well, because they took the status of their mothers, unless freed by the slaveholder. Nell Irvin Painter and other historians have also documented that Southern history went "across the color line". Contemporary accounts by Mary Chesnut and Fanny Kemble, both married into the planter class, as well as accounts by former slaves gathered under the Works Progress Administration (WPA), all attested to the abuse of women slaves by white men of the owning and overseer class.
However, the Nobel economist Robert Fogel controversially describes the belief that slave-breeding and sexual exploitation destroyed the black family as a myth. He argues that the family was the basic unit of social organization under slavery; it was to the economic interest of planters to encourage the stability of slave families, and most of them did so. Most slave sales were either of whole families or of individuals who were at an age when it would have been normal for them to have left the family. However, eye-witness testimony from slaves, such as Frederick Douglass, does not agree with this account. Frederick Douglass, who grew up as a slave in Maryland, reported the systematic separation of slave families. He also reports the widespread rape of slave women, in order to boost slave numbers.
According to Genovese, slaves were fed, clothed, housed and provided medical care in the most minimal manner. It was common to pay small bonuses during the Christmas season, and some slave owners permitted their slaves to keep earnings and gambling profits. (One slave, Denmark Vesey, is known to have won a lottery and bought his freedom.) In many households, treatment of slaves varied with the slave's skin color. Darker-skinned slaves worked in the fields, while lighter-skinned house servants had comparatively better clothing, food and housing.
As in President Thomas Jefferson's household, this was not merely an issue of skin color. Sometimes planters used light-skinned slaves as house servants because they were relatives. Several of Jefferson's household slaves were children of his father-in-law and an enslaved woman, who were brought to the marriage by Jefferson's wife.
However, Fogel argues that the material conditions of the lives of slaves compared favorably with those of free industrial workers. They were not good by modern standards, but this fact emphasizes the hard lot of all workers, free or slave, during the first half of the 19th century. Over the course of his lifetime, the typical slave field hand received about 90% of the income he produced.
In a survey, 58% of historians and 42% of economists disagreed with the proposition that the material condition of slaves compared favorably with those of free industrial workers.
Slaves were considered legal non-persons except if they committed crimes. An Alabama court asserted that slaves "are rational beings, they are capable of committing crimes; and in reference to acts which are crimes, are regarded as persons. Because they are slaves, they are incapable of performing civil acts, and, in reference to all such, they are things, not persons."
In 1811, Arthur William Hodge was the first slave owner executed for the murder of a slave in the British West Indies. He was not, however, as some have claimed, the first white person to have been lawfully executed for the killing of a slave. Records indicate at least two earlier incidents: on November 23, 1739, in Williamsburg, Virginia, two white men, Charles Quin and David White, were hanged for the murder of another white man's black slave; and on April 21, 1775, the Fredericksburg newspaper, the Virginia Gazette, reported that a white man, William Pitman, had been hanged for the murder of his own black slave.
In 1837, an Antislavery Convention of American Women met in New York City, with both black and white women participating. Lucretia Mott and Elizabeth Cady Stanton first met at the 1840 World Anti-Slavery Convention in London, where the hosts refused to seat the women delegates, and there recognized the need for a separate women's rights movement. At the London gathering Stanton also met other women delegates, such as Emily Winslow, Abby Southwick, Elizabeth Neal, Mary Grew and Abby Kimber. The exclusion of the women delegates led to the resolve to hold a convention of their own to form a "society to advocate the rights of women". In 1848 at Seneca Falls, New York, Stanton and Mott launched the women's rights movement, which became one of the most diverse social forces in American life.
Throughout the first half of the 19th century, a movement to end slavery grew in strength throughout the United States. This struggle took place amid strong support for slavery among white Southerners, who profited greatly from the system of enslaved labor. These slave owners began to refer to slavery as the "peculiar institution" in a defensive attempt to differentiate it from other examples of forced labor.
After 1830, a religious movement led by William Lloyd Garrison declared slavery to be a personal sin and demanded the owners repent immediately and start the process of emancipation. The movement was highly controversial and was a factor in causing the American Civil War.
Very few abolitionists, such as John Brown, favored the use of armed force to foment uprisings among the slaves; others tried to use the legal system.
Influential leaders of the abolition movement (1810-60) included:
Slave uprisings that used armed force (1700 - 1859) include:
The economic value of plantation slavery was magnified in 1793 with the invention of the cotton gin by Eli Whitney, a device designed to separate cotton fibers from seedpods and the sometimes sticky seeds. The invention revolutionized the cotton industry by increasing fifty-fold the quantity of cotton that could be processed in a day. The result was the explosive growth of the cotton industry and greatly increased the demand for slave labor in the South.
At the same time, the northern states banned slavery, though, as Alexis de Tocqueville noted in Democracy in America (1835), the prohibition did not always mean that the slaves were freed. Tocqueville noted that as Northern states provided for gradual emancipation, they generally outlawed the sale of slaves within the state. This meant that the only way to sell slaves before they were freed was to move them South. Tocqueville does not document how often such transfers actually occurred. In fact, the emancipation of slaves in the North led to growth in the population of northern free blacks, from several hundred in the 1770s to nearly 50,000 by 1810.
Just as demand for slaves was increasing, the supply was restricted. The United States Constitution, adopted in 1787, prevented Congress from banning the importation of slaves until 1808. On January 1, 1808, Congress banned further imports. Any new slaves would have to be descendants of ones currently in the United States. However, the internal American slave trade and the involvement in the international slave trade or the outfitting of ships for that trade by U.S. citizens were not banned. Though there were certainly violations of this law, slavery in America became, more or less, self-sustaining.
With the movement in Virginia and the Carolinas away from tobacco cultivation and toward mixed agriculture, which was less labor intensive, planters in those states had excess slave labor. They hired out some slaves for occasional labor, but planters also began to sell enslaved African Americans to traders who took them to markets in the Deep South for their expanding plantations. The internal slave trade and forced migration of enslaved African Americans continued for another half-century. Tens of thousands of slaves were transported from the Upper South, including Kentucky and Tennessee which became slave-selling states in these decades, to the Deep South. Thousands of African American families were broken up in the sales, which first concentrated on male laborers. The scale of the internal slave trade contributed substantially to the wealth of the Deep South. In 1840, New Orleans—which had the largest slave market and important shipping—was the third largest city in the country and the wealthiest.
Because of the three-fifths compromise in the U.S. Constitution, slaveholders exerted their power through the Federal Government and passed Federal fugitive slave laws. Refugees from slavery fled the South across the Ohio River and other parts of the Mason-Dixon Line dividing North from South, to the North via the Underground Railroad. The physical presence of African Americans in Cincinnati, Oberlin, and other Northern towns agitated some white Northerners, though others helped hide former slaves from their former owners, and others helped them reach freedom in Canada. After 1854, Republicans fumed that the Slave Power, especially the pro-slavery Democratic Party, controlled two of the three branches of the Federal government.
Most Northeastern states became free states through local emancipation. The settlement of the Midwestern states after the Revolution led to their decisions in the 1820s not to allow slavery. A Northern block of free states united into one contiguous geographic area which shared an anti-slavery culture. The boundary was the Mason-Dixon Line (between slave-state Maryland and free-state Pennsylvania) and the Ohio River.
In 1831, a bloody slave rebellion took place in Southampton County, Virginia. A slave named Nat Turner, who was able to read and write and had "visions", started what became known as Nat Turner's Rebellion or the Southampton Insurrection. With the goal of freeing himself and others, Turner and his followers killed approximately fifty men, women and children, but they were eventually subdued by the militia.
Nat Turner and his followers were hanged, and Turner's body was flayed. The militia also killed more than a hundred slaves who had not been involved in the rebellion. Across the South, harsh new laws were enacted in the aftermath of the 1831 Turner Rebellion to curtail the already limited rights of African Americans. Typical was the Virginia law against educating slaves, free blacks and children of whites and blacks. These laws were often defied by individuals, among whom was noted future Confederate General Stonewall Jackson.
The 1857 Dred Scott decision, decided 7-2, held that a slave did not become free when taken into a free state; Congress could not bar slavery from a territory; and blacks could not be citizens. Furthermore, a state could not bar slaveowners from bringing slaves into that state. This decision, seen as unjust by many Republicans including Abraham Lincoln, was also seen as proof that the Slave Power had seized control of the Supreme Court. The decision, written by Chief Justice Roger B. Taney, barred slaves and their descendants from citizenship. The decision enraged abolitionists and encouraged slave owners, helping to push the country towards civil war.
Lincoln, the Republican, won with a plurality of popular votes and a majority of electoral votes. Lincoln, however, did not appear on the ballots of ten southern states: thus his election necessarily split the nation along sectional lines. Many slave owners in the South feared that the real intent of the Republicans was the abolition of slavery in states where it already existed, and that the sudden emancipation of four million slaves would be problematic for the slave owners and for the economy that drew its greatest profits from the labor of people who were not paid.
They also argued that banning slavery in new states would upset what they saw as a delicate balance of free states and slave states. They feared that ending this balance could lead to the domination of the industrial North with its preference for high tariffs on imported goods. The combination of these factors led the South to secede from the Union, and thus began the American Civil War. Northern leaders had viewed the slavery interests as a threat politically, and with secession, they viewed the prospect of a new southern nation, the Confederate States of America, with control over the Mississippi River and the West, as politically and militarily unacceptable.
Lincoln's Emancipation Proclamation of January 1, 1863 was a powerful move that promised freedom for slaves in the Confederacy as soon as the Union armies reached them, and authorized the enlistment of African Americans in the Union Army. The Emancipation Proclamation did not free slaves in the Union-allied slave-holding states that bordered the Confederacy. Since the Confederate States did not recognize the authority of President Lincoln, and the proclamation did not apply in the border states, at first the proclamation freed only slaves who had escaped behind Union lines. Still, the proclamation made the abolition of slavery an official war goal that was implemented as the Union took territory from the Confederacy. According to the Census of 1860, this policy would free nearly four million slaves, or over 12% of the total population of the United States.
The Arizona Organic Act abolished slavery on February 24, 1863 in the newly formed Arizona Territory. Tennessee and all of the border states (except Kentucky) abolished slavery by early 1865. Thousands of slaves were freed by the operation of the Emancipation Proclamation as Union armies marched across the South. Emancipation as a reality came to the remaining southern slaves after the surrender of all Confederate troops in spring 1865.
At the beginning of the war some Union commanders thought they were supposed to return escaped slaves to their masters. By 1862, when it became clear that this would be a long war, the question of what to do about slavery became more general. The Southern economy and military effort depended on slave labor. It began to seem unreasonable to protect slavery while blockading Southern commerce and destroying Southern production. As one Congressman put it, the slaves "…cannot be neutral. As laborers, if not as soldiers, they will be allies of the rebels, or of the Union." The same Congressman and his fellow Radical Republicans put pressure on Lincoln to rapidly emancipate the slaves, whereas moderate Republicans came to accept gradual, compensated emancipation and colonization. Copperheads, the border states and War Democrats opposed emancipation, although the border states and War Democrats eventually accepted it as part of the total war needed to save the Union.
In 1861, Lincoln expressed the fear that premature attempts at emancipation would mean the loss of the border states, and that "to lose Kentucky is nearly the same as to lose the whole game." At first, Lincoln reversed attempts at emancipation by Secretary of War Simon Cameron and Generals John C. Fremont (in Missouri) and David Hunter (in South Carolina, Georgia and Florida) in order to keep the loyalty of the border states and the War Democrats.
Lincoln mentioned his Emancipation Proclamation to members of his cabinet on July 21, 1862. Secretary of State William H. Seward told Lincoln to wait for a victory before issuing the proclamation, as to do otherwise would seem like "our last shriek on the retreat". In September 1862 the Battle of Antietam provided this opportunity, and the subsequent War Governors' Conference added support for the proclamation. Lincoln had already published a letter encouraging the border states especially to accept emancipation as necessary to save the Union. Lincoln later said that slavery was "somehow the cause of the war". Lincoln issued his preliminary Emancipation Proclamation on September 22, 1862, and said that a final proclamation would be issued if his gradual plan, based on compensated emancipation and voluntary colonization, was rejected. Only the District of Columbia accepted Lincoln's gradual plan, and Lincoln issued his final Emancipation Proclamation on January 1, 1863. In his letter to Hodges, Lincoln explained his belief that "If slavery is not wrong, nothing is wrong … And yet I have never understood that the Presidency conferred upon me an unrestricted right to act officially upon this judgment and feeling ... I claim not to have controlled events, but confess plainly that events have controlled me."
Since the Emancipation Proclamation was based on the President's war powers, it only included territory held by Confederates at the time. However, the Proclamation became a symbol of the Union's growing commitment to add emancipation to the Union's definition of liberty. Lincoln also played a leading role in getting Congress to vote for the Thirteenth Amendment, which made emancipation universal and permanent.
Enslaved African Americans did not wait for Lincoln's action before escaping and seeking freedom behind Union lines. From early years of the war, hundreds of thousands of African Americans escaped to Union lines, especially in occupied areas like Norfolk and the Hampton Roads region in 1862, Tennessee from 1862 on, the line of Sherman's march, etc. So many African Americans fled to Union lines that commanders created camps and schools for them, where both adults and children learned to read and write. The American Missionary Association entered the war effort by sending teachers south to such contraband camps, for instance establishing schools in Norfolk and on nearby plantations. In addition, nearly 200,000 African-American men served with distinction as soldiers and sailors with Union troops. Most of those were escaped slaves. Confederates enslaved captured black Union soldiers, and black soldiers especially were shot when trying to surrender at the Fort Pillow Massacre. This led to a breakdown of the prisoner exchange program, and the growth of prison camps such as Andersonville prison in Georgia where almost 13,000 Union prisoners of war died of starvation and disease.
In spite of the South's shortage of manpower, until 1865 most Southern leaders opposed arming slaves as soldiers. However, a few Confederates had discussed arming slaves from the early stages of the war, and some free blacks had even offered to fight for the South. In 1862 Georgian Congressman Warren Akin supported the enrolling of slaves with the promise of emancipation, as did the Alabama legislature. Support for doing so also grew in other Southern states. A few all-black Confederate militia units, most notably the 1st Louisiana Native Guard, were formed in Louisiana at the start of the war, but were disbanded in 1862. In early March 1865, Virginia endorsed a bill to enlist black soldiers, and on March 13 the Confederate Congress did the same.
At the end of the war there were still over 250,000 slaves in Texas. Word of the collapse of the Confederacy did not reach Texas until June 19, 1865. African Americans and others celebrate that day as Juneteenth, the day of freedom, in Texas, Oklahoma and some other states. It commemorates the date when the news finally reached slaves at Galveston, Texas.
Legally, the last 40,000 or so slaves were freed in Kentucky by the final ratification of the Thirteenth Amendment to the Constitution in December 1865. Slaves still held in New Jersey, Delaware, West Virginia, Maryland and Missouri also became legally free on this date.
Consequently, many religious organizations, former Union Army officers and soldiers, and wealthy philanthropists were inspired to create and fund educational efforts specifically for the betterment of African Americans in the South. They helped create normal schools to generate teachers, such as those which eventually became Hampton University and Tuskegee University. Stimulated by the work of educators such as Dr. Booker T. Washington, by the first part of the 20th century over 5,000 local schools had been built for blacks in the South using private matching funds provided by individuals such as Henry H. Rogers, Andrew Carnegie, and most notably, Julius Rosenwald, each of whom had arisen from modest roots to become wealthy.
On July 30, 2008, the United States House of Representatives passed a resolution apologizing for American slavery and subsequent discriminatory laws.
In the 19th century, proponents of slavery often defended the institution as a "necessary evil". It was feared that emancipation would have more harmful social and economic consequences than the continuation of slavery. In 1820, Thomas Jefferson wrote in a letter that with slavery:
Robert E. Lee wrote in 1856:
Others who also moved from the idea of a necessary evil to that of a positive good were James Henry Hammond and George Fitzhugh. Hammond, like Calhoun, believed slavery was needed to build the rest of society. In a speech to the Senate on March 4, 1858, Hammond developed his mudsill theory, defending his view on slavery by stating: "Such a class you must have, or you would not have that other class which leads progress, civilization, and refinement. It constitutes the very mud-sill of society and of political government; and you might as well attempt to build a house in the air, as to build either the one or the other, except on this mud-sill." He argued that the hired laborers of the North were slaves too: "The difference… is, that our slaves are hired for life and well compensated; there is no starvation, no begging, no want of employment," while those in the North had to search for employment. George Fitzhugh wrote that "the Negro is but a grown up child, and must be governed as a child." In "The Universal Law of Slavery" Fitzhugh argued that slavery provides everything necessary for life and that the slave is unable to survive in a free world because he is lazy and cannot compete with the intelligent white race.
Slavery of Native Americans was organized in colonial and Mexican California through Franciscan missions, theoretically entitled to ten years of Native labor, but in practice maintaining them in perpetual servitude, until their charge was revoked in the mid-1830s. Following the 1847–1848 invasion by U.S. troops, Native Californians were enslaved in the new state from statehood in 1850 to 1867. Slavery required the posting of a bond by the slave holder and enslavement occurred through raids and a four-month servitude imposed as a punishment for Indian "vagrancy".
The nature of slavery in Cherokee society often mirrored that of white slave-owning society. The law barred intermarriage of Cherokees and blacks, whether slave or free. Cherokee who aided slaves were punished with one hundred lashes on the back. In Cherokee society, blacks were barred from holding office, bearing arms, and owning property, and it was illegal to teach blacks to read and write.
Historian Ira Berlin wrote:
Free blacks were perceived "as a continual symbolic threat to slaveholders, challenging the idea that 'black' and 'slave' were synonymous." Free blacks were seen as potential allies of fugitive slaves, and "slaveholders bore witness to their fear and loathing of free blacks in no uncertain terms." For free blacks, who had only a precarious hold on freedom, "slave ownership was not simply an economic convenience but indispensable evidence of the free blacks' determination to break with their slave past and their silent acceptance, if not approval, of slavery."
Historian James Oakes notes that "the evidence is overwhelming that the vast majority of black slaveholders were free men who purchased members of their families or who acted out of benevolence." In the early part of the 19th century, southern states made it increasingly difficult for any slaveholders to free slaves. Often the purchasers of family members were left with no choice but to maintain, on paper, the owner-slave relationship. In the 1850s "there were increasing efforts to restrict the right to hold bondsmen on the grounds that slaves should be kept 'as far as possible under the control of white men only.'"
Kolchin described the state of historiography in the early twentieth century as follows:
Historians James Oliver Horton and Louise Horton described Phillips' mindset, methodology and influence:
The racist attitude concerning slaves carried over into the historiography of the Dunning School of reconstruction history, which dominated in the early 20th century. Writing in 2005, historian Eric Foner states:
Beginning in the 1930s and 1940s, historiography moved away from the “overt” racism of the Phillips era. However, historians still emphasized the slave as an object. Whereas Phillips presented the slave as the object of benign attention by the owners, historians such as Kenneth Stampp changed the emphasis to the mistreatment and abuse of the slave.
In the culmination of the slave-as-victim view, historian Stanley M. Elkins in his 1959 work "Slavery: A Problem in American Institutional and Intellectual Life" compared United States slavery to the brutality of the Nazi concentration camps. He stated the institution destroyed the will of the slave, creating an "emasculated, docile Sambo" who identified totally with the owner. Elkins' thesis was immediately challenged by historians. Gradually historians recognized that in addition to the effects of the owner-slave relationship, slaves did not live in a "totally closed environment but rather in one that permitted the emergence of enormous variety and allowed slaves to pursue important relationships with persons other than their master, including those to be found in their families, churches and communities."
Robert W. Fogel and Stanley L. Engerman in the 1970s, through their work "Time on the Cross," presented the final attempt to salvage a version of the Sambo theory, picturing slaves as having internalized the Protestant work ethic of their owners. In portraying the more benign version of slavery, they also argue in their 1974 book that the material conditions under which the slaves lived and worked compared favorably to those of free workers in the agriculture and industry of the time.
In the 1970s and 1980s, historians made use of archaeological records, black folklore, and statistical data to describe a much more detailed and nuanced picture of slave life. Relying also on autobiographies of ex-slaves and former slave interviews conducted in the 1930s by the Federal Writers' Project, historians described slavery as the slaves experienced it. Far from slaves' being strictly victims or content, historians showed slaves as both resilient and autonomous in many of their activities. Despite the efforts at autonomy and their efforts to make a life within slavery, current historians recognize the precariousness of the slave's situation. Slave children quickly learned that they were subject to the direction of both their parents and their owners. They saw their parents disciplined just as they came to realize that they also could be physically or verbally abused by their owners. Historians writing during this era include John Blassingame (“Slave Community”), Eugene Genovese (“Roll, Jordon, Roll”), Leslie Howard Owens (“This Species of Property”), and Herbert Gutman (“The Black Family in Slavery and Freedom”).
Although slave ownership by private individuals and businesses has been illegal in the United States since 1865, the Thirteenth Amendment to the United States Constitution contains an exception permitting involuntary servitude "as a punishment for crime whereof the party shall have been duly convicted".
The United States Department of Labor occasionally prosecutes cases against people for false imprisonment and involuntary servitude. These cases often involve illegal immigrants who are forced to work as slaves in factories to pay off a debt claimed by the people who transported them into the United States. Other cases have involved domestic workers.
Reports exist of child sexual slavery in the United States, of children being put to work in organized criminal enterprises as well as in legitimate businesses, and of sexual favours being traded for contracts and business, under conditions ranging from inhuman to outwardly ordinary.
In 2002, the U.S. Department of State repeated an earlier CIA estimate that each year about 50,000 women and children are brought against their will to the United States for sexual exploitation. Former Secretary of State Colin Powell said that "Here and abroad, the victims of trafficking toil under inhuman conditions -- in brothels, sweatshops, fields and even in private homes."
| http://www.reference.com/browse/mud-sill | 13
53 | African slave trade
From Wikipedia, the free encyclopedia
|By country or region|
|Opposition and resistance|
- This article discusses systems, history, and effects of slavery within Africa. See Maafa, Atlantic slave trade, Arab slave trade, and Slavery in modern Africa for other discussions.
The African slave trade refers to the historic slave trade within Africa. Slavery in Africa has existed throughout the continent for many centuries to the current day. Systems of servitude and slavery were common in many parts of the continent, as they were in much of the ancient world. In most African societies, the enslaved people were also indentured servants and fully integrated, but not as Chattel slaves. When the Arab slave trade and Atlantic slave trade began, many local slave systems changed and began supplying captives for slave markets outside of Africa.
Slavery in historical Africa was practiced in many different forms and some of these do not clearly fit the definitions of slavery elsewhere in the world. Debt slavery (in Africa known as pawnship), enslavement of war captives, military slavery, slavery for sacrifice, and concubinage were all practiced in various parts of Africa.
Slavery was a small part of the economic life of many societies in Africa until the introduction of transcontinental slave trades (Arab and Atlantic). Although there had been some trans-Saharan trade from the interior of Sub-Saharan Africa to North Africa, the Horn of Africa, Middle East, and Europe.
Slave practices were again transformed with European colonization of Africa and the formal abolition of slavery in the early 1900s.
Forms of slavery
Multiple forms of slavery and servitude have existed throughout Africa during history and were shaped by indigenous practices of slavery as well as the Roman institution of slavery, the Islamic institutions of slavery, and eventually by the Atlantic slave trade. Slavery existed in all regions of Africa (like the rest of the world) and was a part of the economic structure of many societies for many centuries, although the extent varied. In sub-Saharan Africa, the slave relationships were oftentimes complex with rights and freedoms given to individuals held in slavery and restrictions on sale and treatment by their masters. Many communities had hierarchies between different types of slaves: for example, differentiating between those who had been born into slavery and those who had been capture through war. In many African societies, there was very little difference between the free peasants and the feudal vassal peasants. Enslaved people of the Songhay Empire were used primarily in agriculture; they paid tribute to their masters in crop and service but they were slightly restricted in custom and convenience. These non-free people were more an occupational caste, as their bondage was relative.
Scottish explorer Mungo Park wrote:
The slaves in Africa, I suppose, are nearly in the proportion of three to one to the freemen. They claim no reward for their services except food and clothing, and are treated with kindness or severity, according to the good or bad disposition of their masters. Custom, however, has established certain rules with regard to the treatment of slaves, which it is thought dishonourable to violate. Thus the domestic slaves, or such as are born in a man’s own house, are treated with more lenity than those which are purchased with money. ... But these restrictions on the power of the master extend not to the care of prisoners taken in war, nor to that of slaves purchased with money. All these unfortunate beings are considered as strangers and foreigners, who have no right to the protection of the law, and may be treated with severity, or sold to a stranger, according to the pleasure of their owners.—Mungo Park, Travels in the Interior of Africa
Slavery in African cultures was generally more like indentured servitude, although in certain parts of sub-Saharan Africa, slaves were used for human sacrifices in annual rituals, such as those rituals practiced by the denizens of Dahomey. Slaves were often not the chattel of other men, nor enslaved for life. Unfortunately this rarely extended to the slave traders and transporters, who preferred to weed out the "worthless, weak "individuals.
In regards to the indigenous slave trade, Dr. Akurang-Parry has said that:
The viewpoint that “Africans” enslaved “Africans” is obfuscating if not troubling. The deployment of “African” in African history tends to coalesce into obscurantist constructions of identities that allow scholars, for instance, to subtly call into question the humanity of “all” Africans. Whenever Asante rulers sold non-Asantes into slavery, they did not construct it in terms of Africans selling fellow Africans. They saw the victims for what they were, for instance, as Akuapems, without categorizing them as fellow Africans. Equally, when Christian Scandinavians and Russians sold war captives to the Islamic people of the Abbasid Empire, they didn’t think that they were placing fellow Europeans into slavery. This lazy categorizing homogenizes Africans and has become a part of the methodology of African history; not surprisingly, the Western media’s cottage industry on Africa has tapped into it to frame Africans in inchoate generalities allowing the media to describe local crisis in one African state as “African” problem.—Dr. Akurang-Parry, Ending the Slavery Blame
The forms of slavery in Africa were closely related to kinship structures. In many African communities, where land could not be owned, enslavement of individuals was used as a means to increase the influence a person had and expand connections. This made slaves a permanent part of a master's lineage and the children of slaves could become closely connected with the larger family ties. Children of slaves born into families could be integrated into the master's kinship group and rise to prominent positions within society, even to the level of chief in some instances. However, stigma often remained attached and there could be strict separations between slave members of a kinship group and those related to the master.
Chattel slavery is a servitude relationship where the slave is treated as the property of the owner. As such, the owner is free to sell, trade, or treat the slave as they would other pieces of property and the children of the slave often are retained as the property of the master. Chattel slavery was practiced in the Nile river valley and Northern Africa but there is little evidence of widespread chattel slavery being practiced in sub-Saharan Africa prior to the expansion of Islamic legal systems which permitted this form of slavery as a means of conversion.
Many slave relationships in Africa revolved around domestic slavery, where slaves would work primarily in the house of the master but retain some freedoms. Domestic slaves could be considered part of the master's household and would not be sold to others without extreme cause. The slaves could own the profits from their labor (whether in land or in products) and could marry and pass the land on to their children in many cases.
Pawnship, or debt bondage slavery, involves the use of people as collateral to secure the repayment of debt. Slave labor is performed by the debtor, or a relative of the debtor (usually a child). Pawnship was a common form of collateral in West Africa, which involved the pledge of a person (or a member of the person's family) to service to a person providing credit. Pawnship was related to, yet distinct from slavery in most conceptualizations because the arrangement could include limited, specific terms of services to be provided and because kinship ties would protect the person from being sold into slavery. Pawnship was a common practice prior to European contact throughout West Africa, including amongst the Akan people, the Ewe people, the Ga people, the Yoruba people, and the Edo people (in modified forms, it also existed amongst the Efik people, the Igbo people, the Ijaw people, and the Fon people).
Military slavery involved the acquisition and training of conscripted military units which would retain the identity of military slaves even after their service. Slave soldier groups would be run by a Patron, who could be the head of a government or an independent warlord, and who would send his troops out for money and his own political interests.
This was most significant in the Nile valley (primarily in Sudan and Uganda), with slave military units organized by various Islamic authorities, and with the war chiefs of Western Africa. The military units in Sudan were formed in the 1800s through large-scale military raiding in the area which is currently the countries of Sudan and South Sudan.
Slaves for sacrifice
Local slave trade
Several nations such as the Ashanti of present-day Ghana and the Yoruba of present-day Nigeria were involved in slave-trading. Groups such as the Imbangala of Angola and the Nyamwezi of Tanzania would serve as intermediaries or roving bands, waging war on African states to capture people for export as slaves. Historians John Thornton and Linda Heywood of Boston University estimate that 90 percent of those shipped to the New World were enslaved by Africans and then sold to European traders. Henry Louis Gates, the Harvard Chair of African and African American Studies, has stated that "without complex business partnerships between African elites and European traders and commercial agents, the slave trade to the New World would have been impossible, at least on the scale it occurred."
Slavery practices throughout Africa
Like most other regions of the world, slavery and forced labor existed in many kingdoms and societies of Africa for thousands of years. Precise evidence on slavery or the political and economic institutions of slavery before contact with the Arab or Atlantic slave trade is not available. The complex relationships and evidence from oral histories often incorrectly describe many forms of servitude or social status as slavery, even when the practices do not follow conceptualizations of slavery in other regions around the world.
The best evidence of slave practices in Africa comes from the major kingdoms, particularly along the coast, and there is little evidence of widespread slavery practices in stateless societies. Slave trading was mostly secondary to other trade relationships; however, there is evidence of a trans-Saharan slave trade route from Roman times which persisted in the area after the fall of the Roman empire. Kinship structures and the rights provided to slaves (except those captured in war) appear to have limited the scope of slave trading before the start of the Arab slave trade and the Atlantic slave trade.
Chattel slavery had been legal and widespread throughout North Africa when the region was controlled by the Roman Empire (47 BC - ca. 500 AD). The Sahel region south of the Sahara provided many of the African slaves held in North Africa during this period and there was a trans-Saharan slave trade in operation. Chattel slavery persisted after the fall of the Roman empire in the largely Christian communities of the region. After the Islamic expansion into most of the region, the practices continued and eventually, the chattel form of slavery spread to major societies on the southern end of the Sahara (such as Mali, Songhai, and Ghana).
The medieval slave trade in Europe was mainly to the East and South: the Byzantine Empire and the Muslim World were the destinations, Central and Eastern Europe an important source of slaves. Slavery in medieval Europe was so widespread that the Roman Catholic Church repeatedly prohibited it—or at least the export of Christian slaves to non-Christian lands was prohibited at, for example, the Council of Koblenz in 922, the Council of London in 1102, and the Council of Armagh in 1171. Because of religious constraints, the slave trade was monopolised in parts of Europe by Iberian Jews (known as Radhanites) who were able to transfer slaves from pagan Central Europe through Christian Western Europe to Muslim countries in Al-Andalus and Africa. So many Slavs were enslaved for so many centuries that the word 'Slav' became synonymous with slavery. The derivation of the word slave encapsulates a bit of European history and explains why the two words (slaves and Slavs) are so similar; they are, in fact, historically identical.
The Mamluks were slave soldiers who converted to Islam and served the Muslim caliphs and the Ayyubid sultans during the Middle Ages. The first mamluks served the Abbasid caliphs in 9th-century Baghdad. Over time they became a powerful military caste, and on more than one occasion they seized power for themselves, for example ruling Egypt from 1250 to 1517. From 1250 Egypt had been ruled by the Bahri dynasty of Kipchak Turk origin. White enslaved people from the Caucasus served in the army and formed an elite corps of troops that eventually revolted in Egypt to form the Burgi dynasty.
According to Robert Davis between 1 million and 1.25 million Europeans were captured by Barbary pirates and sold as slaves to North Africa and the Ottoman Empire between the 16th and 19th centuries. The coastal villages and towns of Italy, Portugal, Spain and Mediterranean islands were frequently attacked by the pirates and long stretches of the Italian and Spanish coasts were almost completely abandoned by their inhabitants; after 1600 Barbary pirates occasionally entered the Atlantic and struck as far north as Iceland. The most famous corsairs were the Ottoman Barbarossa ("Redbeard"), and his older brother Oruç, Turgut Reis (known as Dragut in the West), Kurtoğlu (known as Curtogoli in the West), Kemal Reis, Salih Reis and Koca Murat Reis.
In 1544, Hayreddin Barbarossa captured Ischia, taking 4,000 prisoners in the process, and deported to slavery some 9,000 inhabitants of Lipari, almost the entire population. In 1551, Dragut enslaved the entire population of the Maltese island Gozo, between 5,000 and 6,000, sending them to Libya. When pirates sacked Vieste in southern Italy in 1554 they took an estimated 7,000 slaves. In 1555, Turgut Reis sailed to Corsica and ransacked Bastia, taking 6000 prisoners. In 1558 Barbary corsairs captured the town of Ciutadella, destroyed it, slaughtered the inhabitants and carried off 3,000 survivors to Istanbul as slaves. In 1563 Turgut Reis landed at the shores of the province of Granada, Spain, and captured the coastal settlements in the area like Almuñécar, along with 4,000 prisoners. Barbary pirates frequently attacked the Balearic islands, resulting in many coastal watchtowers and fortified churches being erected. The threat was so severe that Formentera became uninhabited.
Sahrawi-Moorish society in Northwest Africa was traditionally (and still is, to some extent) stratified into several tribal castes, with the Hassane warrior tribes ruling and extracting tribute – horma – from the subservient Berber-descended znaga tribes.
Horn of Africa
In the Horn of Africa, the Solomonic dynasty of the Ethiopian Highlands often exported Nilotic slaves from their western borderland provinces, or from newly conquered or reconquered lowland territories. The Somali and Afar Muslim sultanates, such as the medieval Adal Sultanate, through their ports also traded Zanj (Bantu) slaves that were captured from the hinterland.
Slavery as practised in what is now Ethiopia and Eritrea was essentially domestic. Slaves served in the houses of their masters or mistresses and were not employed to any significant extent for productive purposes. They were regarded as second-class members of their owners' families, and were fed, clothed and protected. They generally moved around freely and conducted business as free people, and they had complete freedom of religion and culture. The first attempt to abolish slavery in Ethiopia was made by Emperor Tewodros II (r. 1855–1868), although the slave trade was not abolished completely until 1923 with Ethiopia's accession to the League of Nations. The Anti-Slavery Society estimated there were 2 million slaves in the early 1930s, out of an estimated population of between 8 and 16 million. Slavery continued in Ethiopia until the Italian invasion in October 1935, when the institution was abolished by order of the Italian occupying forces. In response to pressure from the Western Allies of World War II, Ethiopia officially abolished slavery and involuntary servitude after regaining its independence in 1942; on 26 August 1942, Haile Selassie issued a proclamation outlawing slavery.
Bantu adult and children slaves (referred to collectively as jareer by their Somali masters) were purchased in the slave market exclusively to do work on plantation grounds. They toiled under the control of and separately from their Somali patrons. In terms of legal considerations, Bantu slaves were devalued. Additionally, Somali social mores strongly discouraged, censured and looked down upon any kind of sexual contact with Bantu slaves. Freedom for these plantation slaves was also often acquired through escape.
Oral tradition recounts slavery existing in the Kingdom of Kongo from the time of its formation with Lukeni lua Nimi enslaving the Mwene Kabunga whom he conquered to establish the kingdom. Early Portuguese writings show that the Kingdom did have slavery before contact, but that they were primarily war captives from the Kingdom of Ndongo.
Slavery was practiced in diverse ways in the different communities of West Africa prior to European trade. With the development of the trans-Saharan slave trade and the economies of gold in the Western Sahel, a number of the major states became organized around the slave trade, including the Ghana Empire, the Mali Empire, and the Songhai Empire. However, other communities in West Africa largely resisted the slave trade. The Mossi Kingdoms tried to take over key sites in the trans-Saharan trade and, when these efforts failed, the Mossi became defenders against slave raiding by the powerful states of the Western Sahel. The Mossi would eventually enter the slave trade in the 1800s, with the Atlantic slave trade as the main market. Similarly, Walter Rodney identified no slavery or significant domestic servitude in early European accounts of the Upper Guinea region, and I.A. Akinjogbin contends that European accounts reveal that the slave trade was not a major activity along the coast controlled by the Yoruba and Aja peoples before Europeans arrived. With the beginning of the Atlantic slave trade, demand for slaves in West Africa increased, a number of states became centered on the slave trade, and domestic slavery increased dramatically.
In Senegambia, between 1300 and 1900, close to one-third of the population was enslaved. In early Islamic states of the western Sudan, including Ghana (750–1076), Mali (1235–1645), Segou (1712–1861), and Songhai (1275–1591), about a third of the population was enslaved. In Sierra Leone in the 19th century about half of the population consisted of enslaved people. In the 19th century at least half the population was enslaved among the Duala of the Cameroon and other peoples of the lower Niger, the Kongo, and the Kasanje kingdom and Chokwe of Angola. Among the Ashanti and Yoruba a third of the population consisted of enslaved people. The population of Kanem (1600–1800) was about one-third enslaved; it was perhaps 40% in Bornu (1580–1890). Between 1750 and 1900, from one- to two-thirds of the entire population of the Fulani jihad states consisted of enslaved people. The population of the Sokoto caliphate formed by Hausas in northern Nigeria and Cameroon was half-enslaved in the 19th century.
When British rule was first imposed on the Sokoto Caliphate and the surrounding areas in northern Nigeria at the turn of the 20th century, approximately 2 million to 2.5 million people there were enslaved. Slavery in northern Nigeria was finally outlawed in 1936.
African Great Lakes
With sea trade from the eastern African Great Lakes region to Persia, China, and India during the first millennium AD, slaves are mentioned as a commodity of secondary importance to gold and ivory. When mentioned, the slave trade appears to have been small-scale and to have mostly involved slave raiding of women and children along the islands of Kilwa Kisiwani, Madagascar and Pemba. Historians Campbell and Alpers argue that there were a host of different categories of labor in East Africa and that the distinction between slave and free individuals was not particularly relevant in most societies. However, with increasing international trade in the 18th and 19th centuries, East Africa began to be involved significantly in the Atlantic slave trade; for example, the king of Kilwa island signed a treaty with a French merchant in 1776 for the delivery of 1,000 slaves per year. At about the same time, merchants from Oman, India, and East Africa began establishing plantations along the coasts and on the islands. To provide workers for these plantations, slave raiding and slave holding became increasingly important in the region, and slave traders (most notably Tippu Tip) became prominent in its political environment. The East African trade reached its height in the early decades of the 1800s, with up to 30,000 slaves sold per year. However, slavery never became a significant part of the domestic economies except in the Sultanate of Zanzibar, where plantations and agricultural slavery were maintained.
In the Great Lakes region of Africa (around present-day Uganda), linguistic evidence shows the existence of slavery through war capture, trade, and pawning going back hundreds of years; however, these forms, particularly pawning, appear to have increased significantly in the 18th and 19th centuries.
Transformations of slavery in Africa
Slave relationships in Africa have been transformed through three large scale processes: the Arab slave trade, the Atlantic slave trade, and the slave emancipation policies and movements in the 20th century. Each of these processes significantly changed the forms, level, and economics of slavery in Africa.
Slave practices in Africa were used during different periods to justify specific forms of European engagement with the peoples of Africa. Eighteenth century writers in Europe claimed that slavery in Africa was quite brutal in order to justify the Atlantic slave trade. Later writers used similar arguments to justify intervention and eventual colonization by European powers to end slavery in Africa.
Africans knew of the harsh slavery that awaited slaves in the New World. Many elite Africans visited Europe on slave ships following the prevailing winds through the New World. One example of this occurred when Antonio Manuel, Kongo’s ambassador to the Vatican, went to Europe in 1604, stopping first in Bahia, Brazil, where he arranged to free a countryman who had been wrongfully enslaved. African monarchs also sent their children along these same slave routes to be educated in Europe, and thousands of former slaves eventually returned to settle Liberia and Sierra Leone.
Trans-Saharan and Indian Ocean trade
The Arab slave trade, established in the 8th and 9th centuries AD, involved a small-scale movement of people largely from East Africa and the Sahel. Islam allowed chattel slavery but prohibited this form of slavery if it involved other Muslims; as a result, the main targets for slavery were the people who lived in the frontier areas of Islam in Africa. The trade of slaves across the Sahara and across the Indian Ocean also has a long history, beginning with the control of sea routes by Afro-Arab traders in the ninth century. It is estimated that only a few thousand enslaved people were taken each year from the Red Sea and Indian Ocean coast and sold throughout the Middle East. This trade accelerated as superior ships led to more trade and greater demand for labour on plantations in the region; eventually, tens of thousands per year were being taken. In East Africa, the main slave trade involved Arabised East Africans.
This changed slave relationships by creating new forms of employment for slaves (as eunuchs to guard harems and in military units) and by creating conditions for freedom (namely conversion, although conversion would only free a slave's children). Although the level of the trade remained small, the total number of slaves traded grew large over the multiple centuries of its existence. Because of its small and gradual nature, the impact on slavery practices in communities that did not convert to Islam was relatively small. However, in the 1800s the slave trade from Africa to the Islamic countries picked up significantly, and when the European slave trade ended around the 1850s the trade to the east increased further, only to be ended with the European colonization of Africa around 1900.
David Livingstone wrote of the slave trade: "To overdraw its evils is a simple impossibility ... We passed a slave woman shot or stabbed through the body and lying on the path. [Onlookers] said an Arab who passed early that morning had done it in anger at losing the price he had given for her, because she was unable to walk any longer. We passed a woman tied by the neck to a tree and dead ... We came upon a man dead from starvation ... The strangest disease I have seen in this country seems really to be broken heartedness, and it attacks free men who have been captured and made slaves." Livingstone estimated that 80,000 Africans died each year before ever reaching the slave markets of Zanzibar. Zanzibar was once East Africa's main slave-trading port, and under Omani Arabs in the 19th century as many as 50,000 slaves were passing through the city each year.
Atlantic slave trade
The Atlantic slave trade radically transformed slavery practices outside of the areas directly controlled by Muslim governments (who largely continued Islamic forms of slavery). The Atlantic slave trade was so significant that it transformed Africans from a small percentage of the global population of slaves in 1600 into the overwhelming majority by 1800. In non-Islamic parts of Africa, slavery practices and institutions were changed dramatically. The slave trade was transformed from a marginal aspect of the economies into the largest sector in a relatively short span. In addition, agricultural plantations increased significantly and became a key aspect of many societies. Finally, it transformed the traditional distribution of slave practices. African and Arab slave trades had a large demand for women and children (who would be trained in various crafts), but the European slave traders demanded men, increasing the practice of slave raiding and the capture of slaves through warfare.
The first Europeans to arrive on the coast of Guinea were the Portuguese; the first European to actually buy enslaved Africans in the region of Guinea was Antão Gonçalves, a Portuguese explorer in 1441 AD. Originally interested in trading mainly for gold and spices, they set up colonies on the uninhabited islands of São Tomé. In the 16th century the Portuguese settlers found that these volcanic islands were ideal for growing sugar. Sugar growing is a labour-intensive undertaking and Portuguese settlers were difficult to attract due to the heat, lack of infrastructure, and hard life. To cultivate the sugar the Portuguese turned to large numbers of enslaved Africans. Elmina Castle on the Gold Coast, originally built by African labour for the Portuguese in 1482 to control the gold trade, became an important depot for slaves that were to be transported to the New World.
The Spanish were the first Europeans to use enslaved Africans in the New World on islands such as Cuba and Hispaniola, where the alarming death rate in the native population had spurred the first royal laws protecting the native population (Laws of Burgos, 1512–1513). The first enslaved Africans arrived in Hispaniola in 1501 soon after the Papal Bull of 1493 gave all of the New World to Spain.
The Atlantic slave trade peaked in the late 18th century, when the largest number of slaves were captured on raiding expeditions into the interior of West Africa. The increase in demand for slaves due to the expansion of European colonial powers to the New World made the slave trade much more lucrative to the West African powers, leading to the establishment of a number of West African empires thriving on the slave trade. These included the Oyo empire (Yoruba), Kong Empire, Kingdom of Benin, Imamate of Futa Jallon, Imamate of Futa Toro, Kingdom of Koya, Kingdom of Khasso, Kingdom of Kaabu, Fante Confederacy, Ashanti Confederacy, and the kingdom of Dahomey. The gradual abolition of slavery in European colonial empires during the 19th century in turn led to the decline and collapse of these African empires, which had relied on a militaristic culture of constant warfare to generate the great numbers of human captives required for trade with the Europeans.
When European powers began to actively suppress the Atlantic slave trade, a further transformation occurred: large holders of slaves in Africa shifted to putting slaves to work on plantations and in other agricultural production.
The final major transformation of slave relationships came with the inconsistent emancipation policies and movements starting in the mid-1800s. Colonial policies were often confusing on the issues; for example, even when slavery was illegal, colonial authorities would return escaped slaves to their masters. Slavery persisted in many countries under colonial rule and in many parts it was not until independence that slavery practices were significantly transformed. Although independence struggles generally brought former slaves and former masters together to fight for independence, in the 1960s many organized politically based on these former stratifications. In some parts of Africa, slavery and slavery-like practices continue to this day and the problem has proven to be difficult for governments and civil society to eliminate.
France, in 1794, was one of Europe's first countries to abolish slavery, but slavery was revived by Napoleon in 1802 and banned for good in 1848. Denmark-Norway was the first European country to ban the slave trade, with a decree issued by the king in 1792 that became fully effective by 1803; slavery itself was not banned there until 1848. In 1807 the British Parliament passed the Abolition of the Slave Trade Act, under which captains of slave ships could be stiffly fined for each slave transported. This was later superseded by the 1833 Slavery Abolition Act, which freed all slaves in the British Empire. Abolition was then extended to the rest of Europe. The 1820 U.S. Law on Slave Trade made slave trading piracy, punishable by death. In 1827, Britain declared the slave trade to be piracy, punishable by death. The power of the Royal Navy was subsequently used to suppress the slave trade, and while some illegal trade, mostly with Brazil, continued, the Atlantic slave trade to Brazil was ended in 1850 by the Eusebio de Queiroz Law, named after Eusebio de Queiroz, Minister of Justice of the Empire of Brazil. After struggles that lasted for decades in the Empire of Brazil, slavery was abolished completely in 1888 by Princess Isabel of Brazil and Minister Rodrigo Silva (son-in-law of senator Eusebio de Queiroz). The West Africa Squadron was credited with capturing 1,600 slave ships between 1808 and 1860 and freeing 150,000 Africans who were aboard these ships. Action was also taken against African leaders who refused to agree to British treaties to outlaw the trade, for example against ‘the usurping King of Lagos', deposed in 1851. Anti-slavery treaties were signed with over 50 African rulers.
The Islamic trans-Saharan and Indian Ocean trades continued, however, and even increased as new sources of enslaved people became available. In the Caucasus, slavery was abolished after the Russian conquest. The slave trade within Africa also increased. The British Navy could suppress much of the trade in the Indian Ocean, but the European powers could do little to affect the land-based intra-continental trade.
The continuing anti-slavery movement in Europe became an excuse and a casus belli for the European conquest and colonisation of much of the African continent. In the late 19th century, the Scramble for Africa saw the continent rapidly divided between imperialistic European powers, and an early but secondary focus of all colonial regimes was the suppression of slavery and the slave trade. In response to this pressure, Ethiopia officially abolished slavery in 1932. By the end of the colonial period the colonial powers were mostly successful in this aim, though slavery remains active in Africa even though the continent has gradually moved to a wage economy. Independent nations attempting to westernise or impress Europe sometimes cultivated an image of slavery suppression; Egypt, for example, hired European soldiers such as those of Samuel White Baker's expedition up the Nile. Slavery has never been eradicated in Africa, and it commonly appears in African states such as Chad, Ethiopia, Mali, Niger, and Sudan, in places where law and order have collapsed. See also Slavery in modern Africa.
Although outlawed in nearly all countries today, slavery is practised in secret in many parts of the world. There are an estimated 27 million victims of slavery worldwide. In Mauritania alone, up to 600,000 men, women and children, or 20% of the population, are enslaved, many of them used as bonded labour. Slavery in Mauritania was finally criminalised in August 2007. It is estimated that as many as 200,000 Sudanese children and women have been taken into slavery in Sudan during the Second Sudanese Civil War. In Niger, where the practice of slavery was outlawed in 2003, a study found that almost 8% of the population are still slaves. In many Western countries, slavery persists in the form of sexual slavery.
The demographic effects of the slave trade are some of the most controversial and debated issues. Walter Rodney argued that the export of so many people had been a demographic disaster and had left Africa permanently disadvantaged when compared to other parts of the world, and that this largely explains that continent's continued poverty. He presents numbers that show that Africa's population stagnated during this period, while that of Europe and Asia grew dramatically. According to Rodney all other areas of the economy were disrupted by the slave trade as the top merchants abandoned traditional industries to pursue slaving and the lower levels of the population were disrupted by the slaving itself.
Others have challenged this view. J. D. Fage compared the demographic effect on the continent as a whole, while David Eltis compared the numbers to the rate of emigration from Europe during this period. In the nineteenth century alone, over 50 million people left Europe for the Americas, a far higher rate than was ever taken from Africa.
This view has in turn been challenged. Joseph E. Inikori argues that the history of the region shows that the effects were still quite deleterious. He argues that the African economic model of the period was very different from the European one and could not sustain such population losses. Population reductions in certain areas also led to widespread problems. Inikori also notes that after the suppression of the slave trade Africa's population almost immediately began to increase rapidly, even prior to the introduction of modern medicines. Shahadah also states that the trade was significant not only in aggregate population losses but also in the profound changes to settlement patterns, epidemiological exposure, and reproductive and social development potential. In addition, the majority of the slaves taken to the Americas were male. So while the slave trade created an immediate drop in the population, its long-term effects were even more drastic.
Effect on the economy of Africa
There is a longstanding debate amongst analysts and scholars about the destructive impacts of the slave trades. It is often claimed that the slave trade undermined local economies and political stability as villages' vital labour forces were shipped overseas, and slave raids and civil wars became commonplace. With the rise of a large commercial slave trade, driven by European needs, enslaving one's enemy became less a consequence of war and more and more a reason to go to war. The slave trade, it is claimed, impeded the formation of larger ethnic groups, causing ethnic factionalism and weakening the formation of stable political structures in many places. It is also claimed to have damaged the mental health and social development of African people.
In contrast to these arguments, J.D. Fage asserts that slavery did not have a wholly disastrous effect on the societies of Africa. Slaves were an expensive commodity, and traders received a great deal in exchange for each enslaved person. At the peak of the slave trade hundreds of thousands of muskets, vast quantities of cloth, gunpowder, and metals were being shipped to Guinea. Most of this money was spent on British-made firearms (of very poor quality) and industrial-grade alcohol. Trade with Europe at the peak of the slave trade—which also included significant exports of gold and ivory—was some 3.5 million pounds Sterling per year. By contrast, the trade of the United Kingdom, the economic superpower of the time, was about 14 million pounds per year over this same period of the late 18th century. As Patrick Manning has pointed out, the vast majority of items traded for slaves were common rather than luxury goods. Textiles, iron ore, currency, and salt were some of the most important commodities imported as a result of the slave trade, and these goods were spread within the entire society raising the general standard of living.
Effects on Europe's economy
Karl Marx in his economic history of capitalism, Das Kapital, claimed that '...the turning of Africa into a warren for the commercial hunting of black-skins [that is, the slave trade], signalled the rosy dawn of the era of capitalist production.' He argued that the slave trade was part of what he termed the 'primitive accumulation' of European capital, the 'non-capitalist' accumulation of wealth that preceded and created the financial conditions for Britain's industrialisation.
Eric Williams has written about the contribution of Africans on the basis of profits from the slave trade and slavery, arguing that those profits were used to help finance Britain’s industrialisation. He argues that the enslavement of Africans was an essential element of the Industrial Revolution, and that European wealth was, in part, a result of slavery, but that by the time of its abolition it had lost its profitability and it was in Britain's economic interest to ban it. Joseph Inikori has written that the British slave trade was more profitable than the critics of Williams believe. Other researchers and historians have strongly contested what has come to be referred to as the “Williams thesis” in academia: David Richardson has concluded that the profits from the slave trade amounted to less than 1% of domestic investment in Britain, and economic historian Stanley Engerman finds that even without subtracting the associated costs of the slave trade (e.g., shipping costs, slave mortality, mortality of whites in Africa, defense costs) or reinvestment of profits back into the slave trade, the total profits from the slave trade and of West Indian plantations amounted to less than 5% of the British economy during any year of the Industrial Revolution. Historian Richard Pares, in an article written before Williams’ book, dismisses the influence of wealth generated from the West Indian plantations upon the financing of the Industrial Revolution, stating that whatever substantial flow of investment from West Indian profits into industry there was occurred after emancipation, not before.
Seymour Drescher and Robert Anstey argue the slave trade remained profitable until the end, because of innovations in agriculture, and that moralistic reform, not economic incentive, was primarily responsible for abolition.
A similar debate has taken place about other European nations. The French slave trade, it is argued, was more profitable than alternative domestic investments and probably encouraged capital accumulation before the Industrial Revolution and the Napoleonic Wars.
Legacy of racism
Maulana Karenga states that the effect of the Atlantic slave trade on African captives was "the morally monstrous destruction of human possibility involved redefining African humanity to the world, poisoning past, present and future relations with others who only know us through this stereotyping and thus damaging the truly human relations among people of today". He states that it constituted the destruction of culture, language, religion and human possibility.
- Cudjoe Lewis, purported to be the last African-born person of this era to be enslaved in the United States.
- Atlantic slave trade
- Arab slave trade
- Blockade of Africa
- Slavery in modern Africa
- Anti-Slavery operations of the United States Navy
- Barbary pirates
- Christianity and slavery
- Islamic views on slavery
- Slavery in Mauritania
- Slavery in Sudan
- Unfree labor
- Tippu Tip
- History of slavery
- History of slavery in the United States
- James Riley (Captain) white slaves in the Sahara
- Slave ship
- African Diaspora
- Basil Davidson, The African Slave Trade, pg 46 (Difference)
- "Anne C. Bailey, ''African Voices of the Atlantic Slave Trade: Beyond the Silence and the Shame''". Books.google.co.za. http://books.google.co.za/books?id=YrIjNMu5_vsC&q=Africans+were+equal+partners#v=snippet&q=Africans%20were%20equal%20partners&f=false.
- Owen 'Alik Shahadah. "The Legacy of the African Holocaust (Mafaa)". Africanholocaust.net. http://www.africanholocaust.net/html_ah/holocaustspecial.htm. Retrieved 1 April 2005.
- Lovejoy, Paul E. (2012). Transformations of Slavery: A History of Slavery in Africa. London: Cambridge University Press.
- Fage, J.D. (1969). "Slavery and the Slave Trade in the Context of West African History". The Journal of African History 10 (3): 393-404.
- Rodney, Walter (1966). "African Slavery and Other Forms of Social Oppression on the Upper Guinea Coast in the Context of the Atlantic Slave-Trade". The Journal of African History 7 (3): 431-443. JSTOR 180112.
- Mungo Park, Travels in the Interior of Africa v. II, Chapter XXII – War and Slavery.
- "Dahomey". Ouidah Museum of History. Archived from the original on 21 December 2009. http://www.museeouidah.org/Theme-Dahomey.htm. Retrieved 13 January 2010.
- "Dr. Akurang-Parry". Ghanaweb.com. 29 April 2010. http://www.ghanaweb.com/GhanaHomePage/NewsArchive/artikel.php?ID=180999.
- Snell, Daniel C. (2011). "Slavery in the Ancient Near East". In Keith Bradley and Paul Cartledge. The Cambridge World History of Slavery. New York: Cambridge University Press. pp. 4-21.
- Alexander, J. (2001). "Islam, Archaeology and Slavery in Africa". World Archaeology 33 (1): 44-60. JSTOR 827888.
- Paul E. Lovejoy and David Richardson (2001). "The Business of Slaving: Pawnship in Western Africa, c. 1600–1810". The Journal of African History 42 (1): 67–89.
- Johnson, Douglas H. (1989). "The Structure of a Legacy: Military Slavery in Northeast Africa". Ethnohistory 36 (1): 72-88.
- Wylie, Kenneth C. (1969). "Innovation and Change in Mende Chieftaincy 1880–1896". The Journal of African History 10 (2): 295–308. JSTOR 179516.
- Henry Louis Gates Jr. "Ending the Slavery Blame-Game". Archived from the original on 23 April 2010. http://www.nytimes.com/2010/04/23/opinion/23gates.html. Retrieved 26 March 2012.
- Manning, Patrick (1983). "Contours of Slavery and Social Change in Africa". American Historical Review 88 (4): 835-857.
- "Historical survey > The international slave trade". Britannica.com. http://www.britannica.com/blackhistory/article-24159.
- "Slavery, serfdom, and indenture through the Middle Ages". Scatoday.net. 3 February 2005. http://scatoday.net/node/3565.
- "Routes of the Jewish Merchants Called Radanites". Jewishencyclopedia.com. 14 November 1902. http://www.jewishencyclopedia.com/view.jsp?artid=693&letter=C#2276.
- "Definition/Word Origin of 'slave' from". The Free Dictionary. http://www.thefreedictionary.com/.
- "Christian Slaves, Muslim Masters: White Slavery in the Mediterranean, the Barbary Coast and Italy, 1500–1800". Robert Davis (2004). p.45. ISBN 1-4039-4551-9.
- "The Mamluk (Slave) Dynasty (Timeline)". Sunnahonline.com. http://www.sunnahonline.com/ilm/seerah/0075_popup11.htm.
- "''When Europeans were slaves: Research suggests white slavery was much more common than previously believed''". Researchnews.osu.edu. http://researchnews.osu.edu/archive/whtslav.htm.
- "BBC – History – British Slaves on the Barbary Coast". Bbc.co.uk. http://www.bbc.co.uk/history/british/empire_seapower/white_slaves_01.shtml.
- Richtel, Matt. "The mysteries and majesties of the Aeolian Islands". International Herald Tribune. http://www.iht.com/articles/2003/09/26/trsic_ed3_.php.
- "History of Menorca". Holidays2menorca.com. http://www.holidays2menorca.com/history.php.
- Christopher Hitchens. "Jefferson Versus the Muslim Pirates by Christopher Hitchens, City Journal Spring 2007". City-journal.org. http://www.city-journal.org/html/17_2_urbanities-thomas_jefferson.html.
- Davis, Robert. Christian Slaves, Muslim Masters: White Slavery in the Mediterranean, the Barbary Coast and Italy, 1500–1800.
- Pankhurst. Ethiopian Borderlands, pp.432
- Willie F. Page, Facts on File, Inc. (2001). Encyclopedia of African history and culture: African kingdoms (500 to 1500), Volume 2. Facts on File. p. 239. ISBN 0816044724. http://books.google.com/books?id=gK1aAAAAYAAJ.
- "Ethiopia – The Interregnum". Countrystudies.us. http://countrystudies.us/ethiopia/16.htm.
- "Ethiopian Slave Trade". http://www.africanholocaust.net/news_ah/ethiopianslavetrade.html.
- "Tewodros II". Infoplease.com. http://www.infoplease.com/ce6/people/A0848307.html.
- Kituo cha katiba >> Haile Selassie Profile
- "Twentieth Century Solutions of the Abolition of Slavery" (PDF). http://www.yale.edu/glc/events/cbss/Miers.pdf.
- Abdussamad H. Ahmad, "Trading in Slaves in Bela-Shangul and Gumuz, Ethiopia: Border Enclaves in History, 1897–1938", Journal of African History, 40 (1999), pp. 433–446 (Abstract)
- The slave trade: myths and preconceptions
- Ethiopia
- "Chronology of slavery". Archived from the original on 24 October 2009. http://www.webcitation.org/5kmCuElxY.
- Catherine Lowe Besteman, Unraveling Somalia: Race, Class, and the Legacy of Slavery, (University of Pennsylvania Press: 1999), p. 116.
- Catherine Lowe Besteman, Unraveling Somalia: Race, Class, and the Legacy of Slavery, (University of Pennsylvania Press: 1999), pp. 83–84
- Heywood, Linda M.; 2009. "Slavery and its transformations in the Kingdom of Kongo: 1491-1800". The Journal of African History 50: 122. doi:10.1017/S0021853709004228.
- Meillassoux, Claude (1991). The Anthropology of Slavery: The Womb of Iron and Gold. Chicago: University of Chicago Press.
- Akinjogbin, I.A. (1967). Dahomey and Its Neighbors: 1708-1818. Cambridge University Press. OCLC 469476592.
- Manning, Patrick (1990). Slavery and African Life: Occidental, Oriental, and African Slave Trades. London: Cambridge.
- "Welcome to Encyclopædia Britannica's Guide to Black History". Britannica.com. http://www.britannica.com/blackhistory/article-24157. Retrieved 17 November 2011.
- Slow Death for Slavery: The Course of Abolition in Northern Nigeria, 1897–1936 (review), Project MUSE – Journal of World History
- The end of slavery, BBC World Service | The Story of Africa
- Kusimba, Chapurukha M. (2004). "The African Archaeological Review". Archaeology of Slavery in East Africa 21 (2): 59-88. JSTOR 25130793.
- Campbell, Gwyn; Alpers, Edward A. (2004). "Introduction: Slavery, forced labour and resistance in Indian Ocean Africa and Asia". Slavery & Abolition 25 (2): ix-xxvii.
- Schoenbrun, David (2007). "Violence, Marginality, Scorn & Honor: Language Evidence of Slavery in the Eighteenth Century". Slavery in the Great Lakes Region of East Africa. Oxford, England: James Currey Ltd.. pp. 38-74.
- Klein, Martin A. (1978). "The Study of Slavery in Africa". The Journal of African History 19 (4): 599-609.
- Fage, J.D. A History of Africa. Routledge, 4th edition, 2001. pg. 258
- Gibbons, Fiachra (6 April 2002). "In the service of the Sultan". The Guardian (London). http://books.guardian.co.uk/reviews/history/0,6121,679352,00.html. Retrieved 23 April 2010.
- David Livingstone; Christian History Institute
- The blood of a nation of Slaves in Stone Town
- Mwachiro, Kevin (30 March 2007). "BBC Remembering East African slave raids". BBC News. http://news.bbc.co.uk/2/hi/africa/6510675.stm.
- "Zanzibar". Archived from the original on 24 October 2009. http://www.webcitation.org/5kmCtlBDt.
- "Swahili Coast". .nationalgeographic.com. 17 October 2002. http://www7.nationalgeographic.com/ngm/data/2001/10/01/html/ft_20011001.6.html.
- Manning, Patrick (1990). "The Slave Trade: The Formal Demography of a Global System". Social Science History 14 (2): 255-279.
- John Henrik Clarke. Critical Lessons in Slavery & the Slavetrade. A & B Book Pub
- "CIA Factbook: Haiti". Cia.gov. https://www.cia.gov/library/publications/the-world-factbook/fields/2028.html?countryName=Haiti&countryCode=ha®ionCode=ca&#ha.
- "Health in Slavery". Of Germs, Genes, and Genocide: Slavery, Capitalism, Imperialism, Health and Medicine. United Kingdom Council for Human Rights. 1989. Archived from the original on 17 June 2008. http://web.archive.org/web/20080617150332/http://www.ukcouncilhumanrights.co.uk/webbook-chap1.html. Retrieved 13 January 2010.
- Bortolot, Alexander Ives (originally published October 2003, last revised May 2009). "The Transatlantic Slave Trade". Metropolitan Museum of Art. http://www.metmuseum.org/toah/hd/slav/hd_slav.htm. Retrieved 13 January 2010.
- Gueye, Mbaye (1979). "The slave trade within the African continent". The african slave trade from the fifteenth to the nineteenth century. Paris: UNESCO. pp. 150-163.
- Hahonou, Eric; Pelckmans, Lotte (2011). "West African Antislavery Movements: Citizenship Struggles and the Legacies of Slavery". Stichproben. Wiener Zeitschrift für kritische Afrikastudien (20): 141–162. http://www.univie.ac.at/ecco/stichproben/20_Pelckmans_Hahonou.pdf.
- Dottridge, Mike (2005). "Types of Forced Labour and Slavery-like Abuse Occurring in Africa Today: A Preliminary Classification". Cahiers d'Études Africaines 45 (179/180): 689–712.
- "The Historical encyclopedia of world slavery, Volume 1 By Junius P. Rodriguez". Books.google.co.uk. Retrieved 4 December 2011.. http://books.google.co.uk/books?id=ATq5_6h2AT0C&pg=PA8&dq=abolish+slavery+iceland&hl=en&ei=O9RSTI7CLueXOPPMzJ4O&sa=X&oi=book_result&ct=result&resnum=1&ved=0CCwQ6AEwAA#v=onepage&q=abolish%20slavery%20iceland&f=false.
- Carrell, Toni L. "The U.S. Navy and the Anti-Piracy Patrol in the Caribbean". NOAA. http://oceanexplorer.noaa.gov/explorations/08trouvadore/background/piracy/piracy.html. Retrieved 11 January 2010.
- A concise history of Brazil. Cambridge University Press. http://books.google.com/books?id=HJdaM325m8IC&pg=PA110&dq=Eusebio+de+Queiroz+Law&hl=en&ei=eHHqTfiBGant0gG0upiTAQ&sa=X&oi=book_result&ct=result&resnum=2&ved=0CC0Q6AEwAQ#v=onepage&q=Eusebio%20de%20Queiroz%20Law&f=false. Retrieved 4 June 2011.
- Loosemore, Jo (8 July 2008). "Sailing Against Slavery". BBC. http://www.bbc.co.uk/devon/content/articles/2007/03/20/abolition_navy_feature.shtml. Retrieved 12 January 2010.
- "The East African Slave Trade". BBC. Archived from the original on 7 December 2009. http://www.bbc.co.uk/worldservice/africa/features/storyofafrica/9chapter3.shtml. Retrieved 12 January 2010.
- "Slavery and Slave Redemption in the Sudan". Human Rights Watch. March 2002. http://www.hrw.org/backgrounder/africa/sudanupdate.htm. Retrieved 12 January 2010.
- "Millions 'forced into slavery'". BBC News. 27 May 2002. http://news.bbc.co.uk/2/hi/2010401.stm. Retrieved 12 January 2010.
- Dodson, Howard (2005). "Slavery in the Twenty-First Century". UN Chronicle. United Nations. http://www.un.org/Pubs/chronicle/2005/issue3/0305p28.html. Retrieved 12 January 2010.
- "Modern slavery". BBC World Service. http://www.bbc.co.uk/worldservice/specials/1458_abolition/page4.shtml. Retrieved 12 January 2010.
- Flynn, Daniel (1 December 2006). "Poverty, tradition shackle Mauritania's slaves". Reuters. http://www.alertnet.org/thenews/newsdesk/L01877550.htm. Retrieved 12 January 2010.
- "Mauritanian MPs pass slavery law". BBC News. 9 August 2007. Archived from the original on 6 January 2010. http://news.bbc.co.uk/2/hi/africa/6938032.stm. Retrieved 12 January 2010.
- Alley, Sabit A (17 March 2001). "War and Genocide in Sudan". iAbolish. http://www.iabolish.org/slavery_today/in_depth/sudan-genocide.html. Retrieved 12 January 2010.
- Coe, Erin. "The Lost Children of Sudan". NYU Livewire. http://journalism.nyu.edu/pubzone/livewire/archived/the_lost_children_of_sudan/. Retrieved 12 January 2010.[unreliable source?]
- Andersson, Hilary (11 February 2005). "Born to Be a Slave in Niger". BBC News. http://news.bbc.co.uk/1/hi/programmes/from_our_own_correspondent/4250709.stm. Retrieved 12 January 2010.
- Steeds, Oliver (3 June 2005). "The Shackles of Slavery in Niger". ABC News. http://abcnews.go.com/International/Story?id=813618&page=1. Retrieved 12 January 2010.
- "Rescued From Sex Slavery". CBS News. 23 February 2005. http://www.cbsnews.com/stories/2005/02/23/48hours/main675913.shtml.
- Rodney, Walter. How Europe underdeveloped Africa. London: Bogle-L'Ouverture Publications, 1972
- David Eltis Economic Growth and the Ending of the Transatlantic slave trade
- "Ideology versus the Tyranny of Paradigm: Historians and the Impact of the Atlantic Slave Trade on African Societies," by Joseph E. Inikori African Economic History. 1994.
- "African Holocaust Special". African Holocaust Society. http://www.africanholocaust.net/html_ah/holocaustspecial.htm. Retrieved 4 January 2007.
- Nunn, Nathan (February 2008). "The Long-Term Effects of Africa's Slave Trades" (PDF). Quarterly Journal of Economics (Cambridge, MA: MIT Press) 123 (1): 139–176. doi:10.1162/qjec.2008.123.1.139. http://www.economics.harvard.edu/faculty/nunn/files/empirical_slavery.pdf. Retrieved 10 April 2008.
- Fage, J.D. A History of Africa. Routledge, 4th edition, 2001. pg. 261
- Marx, K. "Chapter Thirty-One: Genesis of the Industrial Capitalist" Das Kapital: Volume 1, 1867.,
- Williams, Capitalism & Slavery (University of North Carolina Press, 1944), pp. 98–107, 169–177, et passim
- David Richardson, "The British Empire and the Atlantic Slave Trade, 1660–1807," in P.J. Marshall, ed. The Oxford History of the British Empire: Volume II: The Eighteenth Century (1998) pp 440–64
- Stanley L. Engerman. "The Slave Trade and British Capital Formation in the Eighteenth Century". http://www.jstor.org/stable/3113341?seq=13. Retrieved 26 April 2012.
- Richard Pares. "The Economic Factors in the History of the Empire". http://www.jstor.org/stable/2590147?origin=JSTOR-on-page. Retrieved 26 April 2012.
- J.R. Ward, "The British West Indies in the Age of Abolition," in P.J. Marshall, ed. The Oxford History of the British Empire: Volume II: The Eighteenth Century (1998) pp 415–39.
- Guillaume Daudin « Profitability of slave and long distance trading in context : the case of eighteenth century France », Journal of Economic History, vol. 64, n°1, 2004
- "Effects on Africa". "Ron Karenga". http://www.africawithin.com/karenga/ethics.htm.
- Eric Williams, Capitalism and Slavery, London 1972.
- Fage, J.D. A History of Africa (Routledge, 4th edition, 2001 ISBN 0-415-25247-4)
- Faragher, John Mack; Mari Jo Buhle, Daniel Czitrom, Susan H. Armitage (2004). Out of Many. Pearson Prentice Hall. p. 54. ISBN 0-13-182431-7.
- "The Peopling of Africa: A Geographic Interpretation".(Review): An article from: Population and Development Review [HTML] (Digital) by Tukufu Zuberi
- Edward Reynolds. Stand the Storm: a history of the Atlantic slave trade. London: Allison and Busby, 1985.
- Walter Rodney: How Europe Underdeveloped Africa, London 1973.
- Savage, Elizabeth (ed.), The Human Commodity: Perspectives on the Trans-Saharan Slave Trade, London 1992.
- Donald R. Wright, "History of Slavery and Africa", Online Encyclopedia, 2000.
- African Holocaust: The history of slavery in Africa
- Twentieth Century Solutions of the Abolition of Slavery
- The story of Africa: Slavery
- "The impact of the slave trade on Africa," Le Monde diplomatique
- "Ethiopia, Slavery and the League of Nations" Abyssinia/Ethiopia slavery and slaves trade | http://wpedia.goo.ne.jp/enwiki/African_slave_trade | 13 |
The bold plan for an Apollo mission based on LOR held the promise of landing on the moon by 1969, but it presented many daunting technical difficulties. Before NASA could dare attempt any type of lunar landing, it had to learn a great deal more about the destination. Although no one believed that the moon was made of green cheese, some lunar theories of the early 1960s seemed equally fantastic. One theory suggested that the moon was covered by a layer of dust perhaps 50 feet thick. If this were true, no spacecraft would be able to safely land on or take off from the lunar surface. Another theory claimed that the moon's dust was not nearly so thick but that it possessed an electrostatic charge that would cause it to stick to the windows of the lunar landing vehicle, thus making it impossible for the astronauts to see out as they landed. Cornell University astronomer Thomas Gold warned that the moon might even be composed of a spongy material that would crumble upon impact.1
At Langley, Dr. Leonard Roberts, a British mathematician in Clint Brown's Theoretical Mechanics Division, pondered the riddle of the lunar surface and drew an equally pessimistic conclusion. Roberts speculated that because the moon was millions of years old and had been constantly bombarded without the protection of an atmosphere, its surface was most likely so soft that any vehicle attempting to land on it would sink and be buried as if it had landed in quicksand. After the president's commitment to a manned lunar landing in 1961, Roberts began an extensive three year research program to show just what would happen if an exhaust rocket blasted into a surface of very thick powdered sand. His analysis indicated that an incoming rocket would throw up a mountain of sand, thus creating a big rim all the way around the outside of the landed spacecraft. Once the spacecraft settled, this huge bordering volume of sand would collapse, completely engulf the spacecraft, and kill its occupants.2
Telescopes revealed little about the nature of the lunar surface. Not even the latest, most powerful optical instruments could see through the earth's atmosphere well enough to resolve the moon's detailed surface features. Even an object the size of a football stadium would not show up on a telescopic photograph, and enlarging the photograph would only increase the blur. To separate fact from fiction and obtain the necessary information about the craters, crevices, and jagged rocks on the lunar surface, NASA would have to send out automated probes to take a closer look.
The first of these probes took off for the moon in January 1962 as part of a NASA project known as Ranger. A small 800-pound spacecraft was to make a "hard landing," crashing to its destruction on the moon. Before Ranger crashed, however, its on-board multiple television camera payload was to send back close views of the surface, views far more detailed than any captured by a telescope. Sadly, the first six Ranger probes were not successful. Malfunctions of the booster or failures of the launch-vehicle guidance system plagued the first three attempts; malfunctions of the spacecraft itself hampered the fourth and fifth probes; and the primary experiment could not take place during the sixth Ranger attempt because the television equipment would not transmit. Although these incomplete missions did provide some extremely valuable high-resolution photographs, as well as some significant data on the performance of Ranger's systems, in total the highly publicized record of failures embarrassed NASA and demoralized the Ranger project managers at JPL. Fortunately, the last three Ranger flights in 1964 and 1965 were successful. These flights showed that a lunar landing was possible, but the site would have to be carefully chosen to avoid craters and big boulders.3
JPL managed a follow-on project to Ranger known as Surveyor. Despite failures and serious schedule delays, between May 1966 and January 1968, five Surveyor spacecraft made successful soft landings at predetermined points on the lunar surface. From the touchdown dynamics, surface-bearing strength measurements, and eye-level television scanning of the local surface conditions, NASA learned that the moon could easily support the impact and the weight of a small lander. Originally, NASA also planned for (and Congress had authorized) a second type of Surveyor spacecraft, which instead of making a soft landing on the moon, was to be equipped for high-resolution stereoscopic film photography of the moon's surface from lunar orbit and for instrumented measurements of the lunar environment. However, this second Surveyor or "Surveyor Orbiter" did not materialize. The staff and facilities of JPL were already overburdened with the responsibilities for Ranger and "Surveyor Lander"; they simply could not take on another major spaceflight project.4
In 1963, NASA scrapped its plans for a Surveyor Orbiter and turned its attention to a lunar orbiter project that would not use the Surveyor spacecraft system or the Surveyor launch vehicle, Centaur. Lunar Orbiter would have a new spacecraft and use the Atlas-Agena D to launch it into space. Unlike the preceding unmanned lunar probes, which were originally designed for general scientific study, Lunar Orbiter was conceived after a manned lunar landing became a national commitment. The project goal from the start was to support the Apollo mission. Specifically, Lunar Orbiter was designed to provide information on the lunar surface conditions most relevant to a spacecraft landing. This meant, among other things, that its camera had to be sensitive enough to capture subtle slopes and minor protuberances and depressions over a broad area of the moon's front side. As an early working group on the requirements of the lunar photographic mission had determined, Lunar Orbiter had to allow the identification of 45-meter objects over the entire facing surface of the moon, 4.5-meter objects in the "Apollo zone of interest," and 1.2-meter objects in all the proposed landing areas.5
Five Lunar Orbiter missions took place. The first launch occurred in August 1966 within two months of the initial target date. The next four Lunar Orbiters were launched on schedule; the final mission was completed in August 1967, barely a year after the first launch. NASA had planned five flights because mission reliability studies had indicated that five might be necessary to achieve even one success. However, all five Lunar Orbiters were successful, and the prime objective of the project, which was to photograph in detail all the proposed landing sites, was met in three missions. This meant that the last two flights could be devoted to photographic exploration of the rest of the lunar surface for more general scientific purposes. The final cost of the program was not slight: it totaled $163 million, which was more than twice the original estimate of $77 million. That increase, however, compares favorably with the escalation in the price of similar projects, such as Surveyor, which had an estimated cost of $125 million and a final cost of $469 million.
In retrospect, Lunar Orbiter must be, and rightfully has been, regarded as an unqualified success. For the people and institutions responsible, the project proved to be an overwhelmingly positive learning experience on which greater capabilities and ambitions were built. For both the prime contractor, the Boeing Company, a world leader in the building of airplanes, and the project manager, Langley Research Center, a premier aeronautics laboratory, involvement in Lunar Orbiter was a turning point. The successful execution of a risky enterprise became proof positive that they were more than capable of moving into the new world of deep space. For many observers as well as for the people who worked on the project, Lunar Orbiter quickly became a model of how to handle a program of space exploration; its successful progress demonstrated how a clear and discrete objective, strong leadership, and positive person-to-person communication skills can keep a project on track from start to finish.6
Many people inside the American space science community believed that neither Boeing nor Langley was capable of managing a project like Lunar Orbiter or of supporting the integration of first-rate scientific experiments and space missions. After NASA headquarters announced in the summer of 1963 that Langley would manage Lunar Orbiter, more than one space scientist was upset. Dr. Harold C. Urey, a prominent scientist from the University of California at San Diego, wrote a letter to Administrator James Webb asking him, "How in the world could the Langley Research Center, which is nothing more than a bunch of plumbers, manage this scientific program to the moon?"7
Urey's questioning of Langley's competency was part of an unfolding debate over the proper place of general scientific objectives within NASA's spaceflight programs. The U.S. astrophysics community and Dr. Homer E. Newell's Office of Space Sciences at NASA headquarters wanted "quality science" experiments incorporated into every space mission, but this caused problems. Once the commitment had been made to a lunar landing mission, NASA had to decide which was more important: gathering broad scientific information or obtaining data required for accomplishing the lunar landing mission. Ideally, both goals could be incorporated in a project without one compromising the other, but when that seemed impossible, one of the two had to be given priority. The requirements of the manned mission usually won out. For Ranger and Surveyor, projects involving dozens of outside scientists and the large and sophisticated Space Science Division at JPL, that meant that some of the experiments would turn out to be less extensive than the space scientists wanted.8 For Lunar Orbiter, a project involving only a few astrogeologists at the U.S. Geological Survey and a very few space scientists at Langley, it meant, ironically, that the primary goal of serving Apollo would be achieved so quickly that general scientific objectives could be included in its last two missions.
Langley management had entered the fray between science and project engineering during the planning for Project Ranger. At the first Senior Council meeting of the Office of Space Sciences (soon to be renamed the Office of Space Sciences and Applications [OSSA]) held at NASA headquarters on 7 June 1962, Langley Associate Director Charles Donlan had questioned the priority of a scientific agenda for the agency's proposed unmanned lunar probes because a national commitment had since been made to a manned lunar landing. The initial requirements for the probes had been set long before Kennedy's announcement, and therefore, Donlan felt NASA needed to rethink them. Based on his experience at Langley and with Gilruth's STG, Donlan knew that the space science people could be "rather unbending" about adjusting experiments to obtain "scientific data which would assist the manned program." What needed to be done now, he felt, was to turn the attention of the scientists to exploration that would have more direct applications to the Apollo lunar landing program.9
Donlan was distressed specifically by the Office of Space Sciences' recent rejection of a lunar surface experiment proposed by a penetrometer feasibility study group at Langley. This small group, consisting of half a dozen people from the Dynamic Loads and Instrument Research divisions, had devised a spherical projectile, dubbed "Moonball," that was equipped with accelerometers capable of transmitting acceleration versus time signatures during impact with the lunar surface. With these data, researchers could determine the hardness, texture, and load-bearing strength of possible lunar landing sites. The group recommended that Moonball be flown as part of the follow-on to Ranger.10
A successful landing of an intact payload required that the landing loads not exceed the structural capabilities of the vehicle and that the vehicle make its landing in some tenable position so it could take off again. Both of these requirements demanded a knowledge of basic physical properties of the surface material, particularly data demonstrating its hardness or resistance to penetration. In the early 1960s, these properties were still unknown, and the Langley penetrometer feasibility study group wanted to identify them. Without the information, any design of Apollo's lunar lander would have to be based on assumed surface characteristics.11
In the opinion of the Langley penetrometer group, its lunar surface hardness experiment would be of "general scientific interest," but it would, more importantly, provide "timely engineering information important to the design of the Apollo manned lunar landing vehicle."12 Experts at JPL, however, questioned whether surface hardness was an important criterion for any experiment and argued that "the determination of the terrain was more important, particularly for a horizontal landing."13 In the end, the Office of Space Sciences rejected the Langley idea in favor of conducting further seismometer experiments, which might tell scientists something basic about the origins of the moon and its astrogeological history.*
For engineer Donlan, representing a research organization like Langley dominated by engineers and by their quest for practical solutions to applied problems, this rejection seemed a mistake. The issue came down to what NASA needed to know now. That might have been science before Kennedy's commitment, but it definitely was not science after it. In Donlan's view, Langley's rejected approach to lunar impact studies had been the correct one. The consensus at the first Senior Council meeting, however, was that "pure science experiments will be able to provide the engineering answers for Project Apollo." 14
Over the next few years, the engineering requirements for Apollo would win out almost totally. As historian R. Cargill Hall explains in his story of Project Ranger, a "melding" of interests occurred between the Office of Space Sciences and the Office of Manned Space Flight, followed by a virtually complete subordination of the scientific priorities originally built into the unmanned projects. Those priorities, as important as they were, "quite simply did not rate" with Apollo in importance.15
The sensitive camera eyes of the Lunar Orbiter spacecraft carried out a vital reconnaissance mission in support of the Apollo program. Although NASA designed the project to provide scientists with quantitative information about the moon's gravitational field and the dangers of micrometeorites and solar radiation in the vicinity of the lunar environment, the primary objective of Lunar Orbiter was to fly over and photograph the best landing sites for the Apollo spacecraft. NASA suspected that it might have enough information about the lunar terrain to land astronauts safely without the detailed photographic mosaics of the lunar surface compiled from the orbiter flights, but certainly landing sites could be pinpointed more accurately with the help of high-resolution photographic maps. Lunar Orbiter would even help to train the astronauts for visual recognition of the lunar topography and for last-second maneuvering above it before touchdown.
Langley had never managed a deep-space flight project before, and Director Floyd Thompson was not sure that he wanted to take on the burden of responsibility when Oran Nicks, the young director of lunar and planetary programs in Homer Newell's Office of Space Sciences, came to him with the idea early in 1963. Along with Newell's deputy, Edgar M. Cortright, Nicks was the driving force behind the orbiter mission at NASA headquarters. Cortright, however, first favored giving the project to JPL and using Surveyor Orbiter and the Hughes Aircraft Company, which was the prime contractor for Surveyor Lander. Nicks disagreed with this plan and worked to persuade Cortright and others that he was right. In Nicks' judgment, JPL had more than it could handle with Ranger and Surveyor Lander and should not have anything else "put on its plate," certainly not anything as large as the Lunar Orbiter project. NASA Langley, on the other hand, besides having a reputation for being able to handle a variety of aerospace tasks, had just lost the STG to Houston and so, Nicks thought, would be eager to take on the new challenge of a lunar orbiter project. Nicks worked to persuade Cortright that distributing responsibilities and operational programs among the NASA field centers would be "a prudent management decision." NASA needed balance among its research centers. To ensure NASA's future in space, headquarters must assign to all its centers challenging endeavors that would stimulate the development of "new and varied capabilities."16
Cortright was persuaded and gave Nicks permission to approach Floyd Thompson.** This Nicks did on 2 January 1963, during a Senior Council meeting of the Office of Space Sciences at Cape Canaveral. Nicks asked Thompson whether Langley "would be willing to study the feasibility of undertaking a lunar photography experiment," and Thompson answered cautiously that he would ask his staff to consider the idea.17
The historical record does not tell us much about Thompson's personal thoughts regarding taking on Lunar Orbiter. But one can infer from the evidence that Thompson had mixed feelings, not unlike those he experienced about supporting the STG. The Langley director would not only give Nicks a less than straightforward answer to his question but also would think about the offer long and hard before committing the center. Thompson invited several trusted staff members to share their feelings about assuming responsibility for the project. For instance, he went to Clint Brown, by then one of his three assistant directors for research, and asked him what he thought Langley should do. Brown told him emphatically that he did not think Langley should take on Lunar Orbiter. An automated deep-space project would be difficult to manage successfully. The Lunar Orbiter would be completely different from the Ranger and Surveyor spacecraft and, being a new design, would no doubt encounter many unforeseen problems. Even if it were done to everyone's satisfaction (and the proposed schedule for the first launches sounded extremely tight), Langley would probably handicap its functional research divisions to give the project all the support that it would need. Projects devoured resources. Langley staff had learned this firsthand from its experience with the STG. Most of the work for Lunar Orbiter would rest in the management of contracts at industrial plants and in the direction of launch and mission control operations at Cape Canaveral and Pasadena. Brown, for one, did not want to be involved.18
But Thompson decided, in what Brown now calls his director's "greater wisdom," that the center should accept the job of managing the project. Some researchers in Brown's own division had been proposing a Langley-directed photographic mission to the moon for some time, and Thompson, too, was excited by the prospect.19 Furthermore, the revamped Lunar Orbiter was not going to be a space mission seeking general scientific knowledge about the moon. It was going to be a mission directly in support of Apollo, and this meant that engineering requirements would be primary. Langley staff preferred that practical orientation; it resembled, on a larger scale, the kind of applied engineering work they had always done. Whether the "greater wisdom" stemmed from Thompson's own powers of judgment is still not certain. Some informed Langley veterans, notably Brown, feel that Thompson must have also received some strongly stated directive from NASA headquarters that said Langley had no choice but to take on the project.
Whatever was the case in the beginning, Langley management soon welcomed Lunar Orbiter. It was a chance to prove that they could manage a major undertaking. Floyd Thompson personally oversaw many aspects of the project and for more than four years did whatever he could to make sure that Langley's functional divisions supported it fully. Through most of this period, he would meet every Wednesday morning with the top people in the project office to hear about the progress of their work and offer his own ideas. As one staff member recalls, "I enjoyed these meetings thoroughly. [Thompson was] the most outstanding guy I've ever met, a tremendously smart man who knew what to do and when to do it."20
Throughout the early months of 1963, Langley worked with its counterparts at NASA headquarters to establish a solid and cooperative working relationship for Lunar Orbiter. The center began to draw up preliminary specifications for a lightweight orbiter spacecraft and for the vehicle that would launch it (already thought to be the Atlas-Agena D). While Langley personnel were busy with that, TRW's Space Technology Laboratories (STL) of Redondo Beach, California, was conducting a parallel study of a lunar orbiter photographic spacecraft under contract to NASA headquarters. Representatives from STL reported on this work at meetings at Langley on 25 February and 5 March 1963. Langley researchers reviewed the contractor's assessment and found that STL's estimates of the chances for mission success closely matched their own. If five missions were attempted, the probability of achieving one success was 93 percent. The probability of achieving two was 81 percent. Both studies confirmed that a lunar orbiter system using existing hardware would be able to photograph a landed Surveyor and would thus be able to verify the conditions of that possible Apollo landing site. The independent findings concluded that the Lunar Orbiter project could be done successfully and should be done quickly because its contribution to the Apollo program would be great.21
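As a rough illustration of what such reliability figures imply, consider a back-of-the-envelope sketch assuming independent missions with a single per-mission success probability p (an assumption the 1963 studies themselves did not necessarily make):

\[
P(\text{at least 1 success in 5}) = 1 - (1-p)^5, \qquad
P(\text{at least 2 successes in 5}) = 1 - (1-p)^5 - 5p(1-p)^4 .
\]

Under this simple model, the quoted 93-percent chance of at least one success corresponds to a per-mission reliability of roughly p ≈ 0.41; the 81-percent figure for two successes does not follow from the same constant p, which suggests that the STL and Langley analyses rested on more detailed assumptions than this single-parameter sketch.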
With the exception of its involvement in the X-series research airplane programs at Muroc, Langley had not managed a major project during the period of the NACA. As a NASA center, Langley would have to learn to manage projects that involved contractors, subcontractors, other NASA facilities, and headquarters -a tall order for an organization used to doing all its work in-house with little outside interference. Only three major projects were assigned to Langley in the early 1960s: Scout, in 1960; Fire, in 1961; and Lunar Orbiter, in 1963. Project Mercury and Little Joe, although heavily supported by Langley, had been managed by the independent STG, and Project Echo, although managed by Langley for a while, eventually was given to Goddard to oversee.
To prepare for Lunar Orbiter in early 1963, Langley management reviewed what the center had done to initiate the already operating Scout and Fire projects. It also tried to learn from JPL about inaugurating paperwork for, and subsequent management of, Projects Ranger and Surveyor. After these reviews, Langley felt ready to prepare the formal documents required by NASA for the start-up of the project.22
As Langley prepared for Lunar Orbiter, NASA's policies and procedures for project management were changing. In October 1962, spurred on by its new top man, James Webb, the agency had begun to implement a series of structural changes in its overall organization. These were designed to improve relations between headquarters and the field centers, an area of fundamental concern. Instead of managing the field centers through the Office of Programs, as had been the case, NASA was moving them under the command of the headquarters program directors. For Langley, this meant direct lines of communication with the OART and the OSSA. By the end of 1963, a new organizational framework was in place that allowed for more effective management of NASA projects.
In early March 1963, as part of Webb's reform, NASA headquarters issued an updated version of General Management Instruction 4-1-1. This revised document established formal guidelines for the planning and management of a project. Every project was supposed to pass through four preliminary stages: (1) Project Initiation, (2) Project Approval, (3) Project Implementation, and (4) Organization for Project Management.23 Each step required the submission of a formal document for headquarters' approval.
From the beginning, everyone involved with Lunar Orbiter realized that it had to be a fast-track project. In order to help Apollo, everything about it had to be initiated quickly and without too much concern about the letter of the law in the written procedures. Consequently, although no step was to be taken without first securing approval for the preceding step, Langley initiated the paperwork for all four project stages at the same time. This same no-time-to-lose attitude ruled the schedule for project development. All aspects had to be developed concurrently. Launch facilities had to be planned at the same time that the design of the spacecraft started. The photographic, micrometeoroid, and selenodetic experiments had to be prepared even before the mission operations plan was complete. Everything proceeded in parallel: the development of the spacecraft, the mission design, the operational plan and preparation of ground equipment, the creation of computer programs, as well as a testing plan. About this parallel development, Donald H. Ward, a key member of Langley's Lunar Orbiter project team, remarked, "Sometimes this causes undoing some mistakes, but it gets to the end product a lot faster than a serial operation where you design the spacecraft and then the facilities to support it."24 Using the all-at-once approach, Langley put Lunar Orbiter in orbit around the moon only 27 months after signing with the contractor.
On 11 September 1963, Director Floyd Thompson formally established the Lunar Orbiter Project Office (LOPO) at Langley, a lean organization of just a few people who had been at work on Lunar Orbiter since May. Thompson named Clifford H. Nelson as the project manager. An NACA veteran and head of the Measurements Research Branch of IRD, Nelson was an extremely bright engineer. He had served as project engineer on several flight research programs, and Thompson believed that he showed great promise as a technical manager. He worked well with others, and Thompson knew that skill in interpersonal relations would be essential in managing Lunar Orbiter because so much of the work would entail interacting with contractors.
To help Nelson, Thompson originally reassigned eight people to LOPO: engineers Israel Taback, Robert Girouard, William I. Watson, Gerald Brewer, John B. Graham, Edmund A. Brummer, financial accountant Robert Fairburn, and secretary Anna Plott. This group was far smaller than the staff of 100 originally estimated for this office. The most important technical minds brought in to participate came from either IRD or from the Applied Materials and Physics Division, which was the old PARD. Taback was the experienced and sage head of the Navigation and Guidance Branch of IRD; Brummer, an expert in telemetry, also came from IRD; and two new Langley men, Graham and Watson, were brought in to look over the integration of mission operations and spacecraft assembly for the project. A little later IRD's talented Bill Boyer also joined the group as flight operations manager, as did the outstanding mission analyst Norman L. Crabill, who had just finished working on Project Echo. All four of the NACA veterans were serving as branch heads at the time of their assignment to LOPO. This is significant given that individuals at that level of authority and experience are often too entrenched and concerned about further career development to take a temporary assignment on a high-risk project. The LOPO staff set up an office in a room in the large 16-Foot Transonic Tunnel building in the Langley West Area.
When writing the Request for Proposals, Nelson, Taback, and the others involved could only afford the time necessary to prepare a brief document, merely a few pages long, that sketched out some of the detailed requirements. As Israel Taback remembers, even before the project office was established, he and a few fellow members of what would become LOPO had already talked extensively with the potential contractors. Taback explains, "Our idea was that they would be coming back to us [with details]. So it wasn't like we were going out cold, with a brand new program."25
Langley did need to provide one critical detail in the request: the means for stabilizing the spacecraft in lunar orbit. Taback recalls that an "enormous difference" arose between Langley and NASA headquarters over this issue. The argument was about whether the Request for Proposals should require that the contractors produce a rotating satellite known as a "spinner." The staff of the OSSA preferred a spinner based on STL's previous study of Lunar Orbiter requirements. However, Langley's Lunar Orbiter staff doubted the wisdom of specifying the means of stabilization in the Request for Proposals. They wished to keep the door open to other, perhaps better, ways of stabilizing the vehicle for photography.
The goal of the project, after all, was to take the best possible high-resolution pictures of the moon's surface. To do that, NASA needed to create the best possible orbital platform for the spacecraft's sophisticated camera equipment, whatever that turned out to be. From their preliminary analysis and conversations about mission requirements, Taback, Nelson, and others in LOPO felt that taking these pictures from a three-axis (yaw, pitch, and roll), attitude-stabilized device would be easier than taking them from a spinner. A spinner would cause distortions of the image because of the rotation of the vehicle. Langley's John F. Newcomb of the Aero Space Mechanics Division (and eventual member of LOPO) had calculated that this distortion would destroy the resolution and thus seriously compromise the overall quality of the pictures. This was a compromise that the people at Langley quickly decided they could not live with. Thus, for sound technical reasons, Langley insisted that the design of the orbiter be kept an open matter and not be specified in the Request for Proposals. Even if Langley's engineers were wrong and a properly designed spinner would be most effective, the sensible approach was to entertain all the ideas the aerospace industry could come up with before choosing a design.26
For several weeks in the summer of 1963, headquarters tried to resist the Langley position. Preliminary studies by both STL for the OSSA and by Bell Communications (BellComm) for the Office of Manned Space Flight indicated that a rotating spacecraft using a spin-scan film camera similar to the one developed by the Rand Corporation in 1958 for an air force satellite reconnaissance system ("spy in the sky") would work well for Lunar Orbiter. Such a spinner would be less complicated and less costly than the three-axis-stabilized spacecraft preferred by Langley.27
But Langley staff would not cave in on an issue so fundamental to the project's success. Eventually Newell, Cortright, Nicks, and Scherer in the OSSA offered a compromise that Langley could accept: the Request for Proposals could state that "if bidders could offer approaches which differed from the established specifications but which would result in substantial gains in the probability of mission success, reliability, schedule, and economy," then NASA most certainly invited them to submit those alternatives. The request would also emphasize that NASA wanted a lunar orbiter that was built from as much off-the-shelf hardware as possible. The development of many new technological systems would require time that Langley did not have.28
Langley and headquarters had other differences of opinion about the request. For example, a serious problem arose over the nature of the contract. Langley's chief procurement officer, Sherwood Butler, took the conservative position that a traditional cost-plus-a-fixed-fee contract would be best in a project in which several unknown development problems were bound to arise. With this kind of contract, NASA would pay the contractor for all actual costs plus a sum of money fixed by the contract negotiations as a reasonable profit.
NASA headquarters, on the other hand, felt that some attractive financial incentives should be built into the contract. Although unusual up to this point in NASA history, headquarters believed that an incentives contract would be best for Lunar Orbiter. Such a contract would assure that the contractor would do everything possible to solve all the problems encountered and make sure that the project worked. The incentives could be written up in such a way that if, for instance, the contractor lost money on any one Lunar Orbiter mission, the loss could be recouped with a handsome profit on the other missions. The efficacy of a cost-plus-incentives contract rested in the solid premise that nothing motivated a contractor more than making money. NASA headquarters apparently understood this better than Langley's procurement officer who wanted to keep tight fiscal control over the project and did not want to do the hairsplitting that often came with evaluating whether the incentive clauses had been met.29
On the matter of incentives, Langley's LOPO engineers sided against their own man and with NASA headquarters. They, too, thought that incentives were the best way to do business with a contractor - as well as the best way to illustrate the urgency that NASA attached to Lunar Orbiter.30 The only thing that bothered them was the vagueness of the incentives being discussed. When Director Floyd Thompson understood that his engineers really wanted to take the side of headquarters on this issue, he quickly concurred. He insisted only on three things: the incentives had to be based on clear stipulations tied to cost, delivery, and performance, with penalties for deadline overruns; the contract had to be fully negotiated and signed before Langley started working with any contractor (in other words, work could not start under a letter of intent); and all bidding had to be competitive. Thompson worried that the OSSA might be biased in favor of STL as the prime contractor because of STL's prior study of the requirements of lunar orbiter systems.31
In mid-August 1963, with these problems worked out with headquarters, Langley finalized the Request for Proposals and associated Statement of Work, which outlined specifications, and delivered both to Captain Lee R. Scherer, Lunar Orbiter's program manager at NASA headquarters, for presentation to Ed Cortright and his deputy Oran Nicks. The documents stated explicitly that the main mission of Lunar Orbiter was "the acquisition of photographic data of high and medium resolution for selection of suitable Apollo and Surveyor landing sites." The request set out detailed criteria for such things as identifying "cones" (planar features at right angles to a flat surface), "slopes" (circular areas inclined with respect to the plane perpendicular to local gravity), and other subtle aspects of the lunar surface. Obtaining information about the size and shape of the moon and about the lunar gravitational field was deemed less important. By omitting a detailed description of the secondary objectives in the request, Langley made clear that "under no circumstances" could anything "be allowed to dilute the major photo reconnaissance mission."32 The urgency of the national commitment to a manned lunar landing mission was the force driving Lunar Orbiter. Langley wanted no confusion on that point.
Cliff Nelson and LOPO moved quickly in September 1963 to create a Source Evaluation Board that would possess the technical expertise and good judgment to help NASA choose wisely from among the industrial firms bidding for Lunar Orbiter. A large board of reviewers (comprising more than 80 evaluators and consultants from NASA centers and other aerospace organizations) was divided into groups to evaluate the technical feasibility, cost, contract management concepts, business operations, and other critical aspects of the proposals. One group, the so-called Scientists' Panel, judged the suitability of the proposed spacecraft for providing valuable information to the scientific community after the photographic mission had been completed. Langley's two representatives on the Scientists' Panel were Clint Brown and Dr. Samuel Katzoff, an extremely insightful engineering analyst, 27-year Langley veteran, and assistant chief of the Applied Materials and Physics Division.
Although the opinions of all the knowledgeable outsiders were taken seriously, Langley intended to make the decision.33 Chairing the Source Evaluation Board was Eugene Draley, one of Floyd Thompson's assistant directors. When the board finished interviewing all the bidders, hearing their oral presentations, and tallying the results of its scoring of the proposals (a possible 70 points for technical merit and 30 points for business management), it was to present a formal recommendation to Thompson. He in turn would pass on the findings with comments to Homer Newell's office in Washington.
Five major aerospace firms submitted proposals for the Lunar Orbiter contract. Three were California firms: STL in Redondo Beach, Lockheed Missiles and Space Company of Sunnyvale, and Hughes Aircraft Company of Los Angeles. The Martin Company of Baltimore and the Boeing Company of Seattle were the other two bidders.34
Three of the five proposals were excellent. Hughes had been developing an ingenious spin-stabilization system for geosynchronous communication satellites, which helped the company to submit an impressive proposal for a rotating vehicle. With Hughes's record in spacecraft design and fabrication, the Source Evaluation Board gave Hughes serious consideration. STL also submitted a fine proposal for a spin-stabilized rotator. This came as no surprise, of course, given STL's prior work for Surveyor as well as its prior contractor studies on lunar orbiter systems for NASA headquarters.
The third outstanding proposal, entitled "ACLOPS" (Agena-Class Lunar Orbiter Project), was Boeing's. The well-known airplane manufacturer had not been among the companies originally invited to bid on Lunar Orbiter and was not recognized as the most logical of contenders. However, Boeing recently had successfully completed the Bomarc missile program and was anxious to become involved with the civilian space program, especially now that the DOD was canceling Dyna-Soar, an air force project for the development of an experimental X-20 aerospace plane. This cancellation released several highly qualified U.S. Air Force personnel, who were still working at Boeing, to support a new Boeing undertaking in space. Company representatives had visited Langley to discuss Lunar Orbiter, and Langley engineers had been so excited by what they had heard that they had pestered Thompson to persuade Seamans to extend an invitation to Boeing to join the bidding. The proposals from Martin, a newcomer in the business of automated space probes, and Lockheed, a company with years of experience handling the Agena space vehicle for the air force, were also quite satisfactory. In the opinion of the Source Evaluation Board, however, the proposals from Martin and Lockheed were not as strong as those from Boeing and Hughes.
The LOPO staff and the Langley representatives decided early in the evaluation that they wanted Boeing to be selected as the contractor; on behalf of the technical review team, Israel Taback had made this preference known both in private conversations with, and formal presentations to, the Source Evaluation Board. Boeing was Langley's choice because it proposed a three-axis-stabilized spacecraft rather than a spinner. For attitude reference in orbit, the spacecraft would use an optical sensor similar to the one that was being planned for use on the Mariner C spacecraft, which fixed on the star Canopus.
An attitude-stabilized orbiter eliminated the need for a spin-scan camera. This type of photographic system, first conceived by Merton E. Davies of the Rand Corporation in 1958, could compensate for the distortions caused by a rotating spacecraft but would require extensive development. In the Boeing proposal, Lunar Orbiter would carry a photo subsystem designed by Eastman Kodak and used on DOD spy satellites.35 This subsystem worked automatically and with the precision of a Swiss watch. It employed two lenses that took pictures simultaneously on a roll of 70-millimeter aerial film. If one lens failed, the other still worked. One lens had a focal length of 610 millimeters (24 inches) and could take pictures from an altitude of 46 kilometers (28.5 miles) with a high resolution for limited-area coverage of approximately 1 meter. The other, which had a focal length of about 80 millimeters (3 inches), could take pictures with a medium resolution of approximately 8 meters for wide coverage of the lunar surface. The film would be developed on board the spacecraft using the proven Eastman Kodak "Bimat" method. The film would be in contact with a web containing a single-solution dry-processing chemical, which eliminated the need to use wet chemicals. Developed automatically and wound onto a storage spool, the processed film could then be "read out" and transmitted by the spacecraft's communications subsystem to receiving stations of JPL's worldwide Deep Space Network, which was developed for communication with spacefaring vehicles destined for the moon and beyond.36
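These resolution figures are mutually consistent under a simple scale-projection check (an editorial sketch, not part of the original specification): ground resolution is roughly the altitude-to-focal-length ratio multiplied by the smallest element d that the lens and film can resolve,

\[
\text{ground resolution} \approx \frac{H}{f}\, d, \qquad
\frac{46{,}000\ \text{m}}{0.610\ \text{m}} \approx 75{,}000, \qquad
\frac{46{,}000\ \text{m}}{0.080\ \text{m}} \approx 575{,}000 .
\]

Both lenses then imply a d on the order of 13 to 14 micrometers on the film (1 m divided by 75,000 and 8 m divided by 575,000, respectively), that is, roughly 70 to 75 line pairs per millimeter, a plausible figure for the aerial film of the period.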
How Boeing had the good sense to propose an attitude-stabilized platform based on the Eastman Kodak camera, rather than a rotator with a yet-to-be-developed camera, is not totally clear. Langley engineers had conversed with representatives of all the interested bidders, so Boeing's people may well have picked up on Langley's concerns about the quality of photographs from spinners. The other bidders, especially STL and Hughes, with their expertise in spin-stabilized spacecraft, might also have picked up on those concerns but were too confident in the type of rotationally stabilized system they had been working on to change course in midstream.
Furthermore, Boeing had been working closely with RCA, which for a time was also thinking about submitting a proposal for Lunar Orbiter. RCA's idea was a lightweight (200-kilogram), three-axis, attitude-stabilized, and camera-bearing payload that could be injected into lunar orbit as part of a Ranger-type probe. A lunar orbiter study group, chaired by Lee Scherer at NASA headquarters, had evaluated RCA's approach in October 1962, however, and found it lacking. It was too expensive ($20.4 million for flying only three spacecraft), and its proposed vidicon television unit could not cover the lunar surface in either the detail or the wide panoramas NASA wanted.37
Boeing knew all about this rejected RCA approach. After talking to Langley's engineers, the company shrewdly decided to stay with an attitude stabilized orbiter but to dump the use of the inadequate vidicon television. Boeing replaced the television system with an instrument with a proven track record in planetary reconnaissance photography: the Eastman Kodak spy camera.38
On 20 December 1963, two weeks after the Source Evaluation Board made its formal recommendation to Administrator James Webb in Washington, NASA announced that it would be negotiating with Boeing as prime contractor for the Lunar Orbiter project. Along with the excellence of its proposed spacecraft design and Kodak camera, NASA singled out the strength of Boeing's commitment to the project and its corporate capabilities to complete it on schedule without relying on many subcontractors. Still, the choice was a bit ironic. Only 14 months earlier, the Scherer study group had rejected RCA's approach in favor of a study of a spin-stabilized spacecraft proposed by STL. Now Boeing had outmaneuvered its competition by proposing a spacecraft that incorporated essential features of the rejected RCA concept and almost none from STL's previously accepted one.
Boeing won the contract even though it asked for considerably more money than any of the other bidders. The lowest bid, from Hughes, was $41,495,339, less than half of Boeing's $83,562,199, a figure that would quickly rise when the work started. Not surprisingly, NASA faced some congressional criticism and had to defend its choice. The agency justified its selection by referring confidently to what Boeing alone proposed to do to ensure protection of Lunar Orbiter's photographic film from the hazards of solar radiation.39
This was a technical detail that deeply concerned LOPO. Experiments conducted by Boeing and by Dr. Trutz Foelsche, a Langley scientist in the Space Mechanics (formerly Theoretical Mechanics) Division who specialized in the study of space radiation effects, suggested that even small doses of radiation from solar flares could fog ordinary high-speed photographic film. This would be true especially in the case of an instrumented probe like Lunar Orbiter, which had thin exterior vehicular shielding. Even if the thickness of the shielding around the film was increased tenfold (from 1 g/cm2 to 10 g/cm2), Foelsche judged that high-speed film would not make it through a significant solar-particle event without serious damage.40 Thus, something extraordinary had to be done to protect the high-speed film. A better solution was not to use high-speed film at all.
As NASA explained successfully to its critics, the other bidders for the Lunar Orbiter contract relied on high-speed film and faster shutter speeds for their on-board photographic subsystems. Only Boeing did not. When delegates from STL, Hughes, Martin, and Lockheed were asked at a bidders' briefing in November 1963 about what would happen to their film if a solar event occurred during an orbiter mission, they all had to admit that the film would be damaged seriously. Only Boeing could claim otherwise. Even with minimal shielding, the more insensitive, low-speed film used by the Kodak camera would not be fogged by high-energy radiation, not even if the spacecraft moved through the Van Allen radiation belts.41 This, indeed, proved to be the case. During the third mission of Lunar Orbiter in February 1967, a solar flare with a high amount of optical activity did occur, but the film passed through it unspoiled.42
Negotiations with Boeing did not take long. Formal negotiations began on 17 March 1964, and ended just four days later. On 7 May Administrator Webb signed the document that made Lunar Orbiter an official NASA commitment. Hopes were high. But in the cynical months of 1964, with Ranger's setbacks still making headlines and critics still faulting NASA for failing to match Soviet achievements in space, everyone doubted whether Lunar Orbiter would be ready for its first scheduled flight to the moon in just two years.
Large projects are run by only a handful of people. Four or five key individuals delegate jobs and responsibilities to others. This was certainly true for Lunar Orbiter. From start to finish, Langley's LOPO remained a small organization; its original nucleus of 9 staff members never grew any larger than 50 professionals. Langley management knew that keeping LOPO's staff small meant fewer people in need of positions when the project ended. If all the positions were built into a large project office, many careers would be out on a limb; a much safer organizational method was for a small project office to draw people from other research and technical divisions to assist the project as needed.43
In the case of Lunar Orbiter, four men ran the project: Cliff Nelson, the project manager; Israel Taback, who was in charge of all activities leading to the production and testing of the spacecraft; Bill Boyer, who was responsible for planning and integrating launch and flight operations; and James V. Martin, the assistant project manager. Nelson had accepted the assignment with Thompson's assurance that he would be given wide latitude in choosing the men and women he wanted to work with him in the project office. As a result, virtually all of his top people were hand-picked.
The one significant exception was his chief assistant, Jim Martin. In September 1964, the Langley assistant director responsible for the project office, Gene Draley, brought in Martin to help Nelson cope with some of the stickier details of Lunar Orbiter's management. A senior manager in charge of Republic Aviation's space systems requirements, Martin had a tremendous ability to anticipate business management problems and plenty of experience taking care of them. Furthermore, he was a well-organized and skillful executive who could make schedules, set due dates, and closely track the progress of the contractors and subcontractors. This "paper" management of a major project was troublesome for Cliff Nelson, a quiet, people-oriented person. Draley knew about taskmaster Martin from Republic's involvement in Project Fire and was hopeful that Martin's acerbity and business-mindedness would complement Nelson's good-heartedness and greater technical depth, especially in dealings with contractors.
Because Cliff Nelson and Jim Martin were so entirely opposite in personality, they did occasionally clash, which caused a few internal problems in LOPO. On the whole, however, the alliance worked quite well, even though it had been forced by Langley management. Nelson generally oversaw the whole endeavor and made sure that everybody worked together as a team. For the monitoring of the day-to-day progress of the project's many operations, Nelson relied on the dynamic Martin. For example, when problems arose with the motion-compensation apparatus for the Kodak camera, Martin went to the contractor's plant to assess the situation and decided that its management was not placing enough emphasis on following a schedule. Martin acted tough, pounded on the table, and made the contractor put workable schedules together quickly. When gentler persuasion was called for or subtler interpersonal relationships were involved, Nelson was the person for the job. Martin, who was technically competent but not as technically talented as Nelson, also deferred to the project manager when a decision required particularly complex engineering analysis. Thus, the two men worked together for the overall betterment of Lunar Orbiter.44
Placing an excellent person with just the right specialization in just the right job was one of the most important elements behind the success of Lunar Orbiter, and for this eminently sensible approach to project management, Cliff Nelson and Floyd Thompson deserve the lion's share of credit. Both men cultivated a management style that emphasized direct dealings with people and often ignored formal organizational channels. Both stressed the importance of teamwork and would not tolerate any individual, however talented, willfully undermining the esprit de corps. Before filling any position in the project office, Nelson gave the selection much thought. He questioned whether the people under consideration were compatible with others already in his project organization. He wanted to know whether candidates were goal-oriented - willing to do whatever was necessary (working overtime or traveling) to complete the project.45 Because Langley possessed so many employees who had been working at the center for many years, the track record of most people was either well known or easy to ascertain. Given the outstanding performance of Lunar Orbiter and the testimonies about an exceptionally healthy work environment in the project office, Nelson did an excellent job predicting who would make a productive member of the project team.46
Considering Langley's historic emphasis on fundamental applied aeronautical research, it might seem surprising that Langley scientists and engineers did not try to hide inside the dark return passage of a wind tunnel rather than be diverted into a spaceflight project like Lunar Orbiter. As has been discussed, some researchers at Langley (and agency-wide) objected to and resisted involvement with project work. The Surveyor project at JPL had suffered from staff members' reluctance to leave their own specialties to work on a space project. However, by the early 1960s the enthusiasm for spaceflight ran so rampant that it was not hard to staff a space project office. All the individuals who joined LOPO at Langley came enthusiastically; otherwise Cliff Nelson would not have had them. Israel Taback, who had been running the Communications and Control Branch of IRD, remembers having become distressed with the thickening of what he calls "the paper forest": the preparation of five-year plans, ten-year plans, and other lengthy documents needed to justify NASA's budget requests. The work he had been doing with airplanes and aerospace vehicles was interesting (he had just finished providing much of the flight instrumentation for the X-15 program), but not so interesting that he wanted to turn down Cliff Nelson's offer to join Lunar Orbiter. "The project was brand new and sounded much more exciting than what I had been doing," Taback remembers. It appealed to him also because of its high visibility both inside and outside the center. Everyone had to recognize the importance of a project directly related to the national goal of landing a man on the moon. 47
Norman L. Crabill, the head of LOPO's mission design team, also decided to join the project. On a Friday afternoon, he had received the word that one person from his branch of the Applied Materials and Physics Division would have to be named by the following Monday as a transfer to LOPO; as branch head, Crabill himself would have to make the choice. That weekend he asked himself, "What's your own future, Crabill? This is space. If you don't step up to this, what's your next chance? You've already decided not to go with the guys to Houston." He immediately knew whom to transfer: "It was me." That was how he "got into the space business." And in his opinion, it was "the best thing" that he ever did.48
Cliff Nelson's office had the good sense to realize that monitoring the prime contractor did not entail doing Boeing's work for Boeing. Nelson approached the management of Lunar Orbiter more practically: the contractor was "to perform the work at hand while the field center retained responsibility for overseeing his progress and assuring that the job was done according to the terms of the contract." For Lunar Orbiter, this philosophy meant specifically that the project office would have to keep "a continuing watch on the progress of the various components, subsystems, and the whole spacecraft system during the different phases of designing, fabricating and testing them."49 Frequent meetings would take place between Nelson and his staff and their counterparts at Boeing to discuss all critical matters, but Langley would not assign all the jobs, solve all the problems, or micromanage every detail of the contractor's work.
This philosophy sat well with Robert J. Helberg, head of Boeing's Lunar Orbiter team. Helberg had recently finished directing the company's work on the Bomarc missile, making him a natural choice for manager of Boeing's next space venture. The Swedish-born Helberg was absolutely straightforward, and all his people respected him immensely, as would everyone in LOPO. He and fellow Swede Cliff Nelson got along famously. Their relaxed relationship set the tone for interaction between Langley and Boeing. Ideas and concerns passed freely back and forth between the project offices. Nelson and his people "never had to fear the contractor was just telling [them] a lie to make money," and Helberg and his tightly knit, 220-member Lunar Orbiter team never had to complain about uncaring, paper-shuffling bureaucrats who were mainly interested in dotting all the i's and crossing all the t's and making sure that nothing illegal was done that could bother government auditors and put their necks in a wringer.50
The Langley/NASA headquarters relationship was also harmonious and effective. This was in sharp contrast to the relationship between JPL and headquarters during the Surveyor project. Initially, JPL had tried to monitor the Surveyor contractor, Hughes, with only a small staff that provided little on-site technical direction; however, because of unclear objectives, the open-ended nature of the project (such basic things as which experiment packages would be included on the Surveyor spacecraft were uncertain), and a too highly diffused project organization within Hughes, JPL's "laissez-faire" approach to project management did not work. As the problems snowballed, Cortright found it necessary to intervene and compelled JPL to assign a regiment of on-site supervisors to watch over every detail of the work being done by Hughes. Thus, as one analyst of Surveyor's management has observed, "the responsibility for overall spacecraft development was gradually retrieved from Hughes by JPL, thereby altering significantly the respective roles of the field center and the spacecraft systems contractors."51
Nothing so unfortunate happened during Lunar Orbiter, partly because NASA had learned from the false steps and outright mistakes made in the management of Surveyor. For example, NASA now knew that before implementing a project, everyone involved must take part in extensive preliminary discussions. These conversations ensured that the project's goals were certain and each party's responsibilities clear. Each office should expect maximum cooperation and minimal unnecessary interference from the others. Before Lunar Orbiter was under way, this excellent groundwork had been laid.
As has been suggested by a 1972 study done by the National Academy of Public Administration, the Lunar Orbiter project can serve as a model of the ideal relationship between a prime contractor, a project office, a field center, a program office, and headquarters. From start to finish nearly everything important about the interrelationship worked out superbly in Lunar Orbiter. According to LOPO's Israel Taback, "Everyone worked together harmoniously as a team whether they were government, from headquarters or from Langley, or from Boeing." No one tried to take advantage of rank or to exert any undue authority because of an official title or organizational affiliation.52 That is not to say that problems never occurred in the management of Lunar Orbiter. In any large and complex technological project involving several parties, some conflicts are bound to arise. The key to project success lies in how differences are resolved.
The most fundamental issue in the premission planning for Lunar Orbiter was how the moon was to be photographed. Would the photography be "concentrated" on a predetermined single target, or would it be "distributed" over several selected targets across the moon's surface? On the answer to this basic question depended the successful integration of the entire mission plan for Lunar Orbiter.
For Lunar Orbiter, as with any other spaceflight program, mission planning involved the establishment of a complicated sequence of events: When should the spacecraft be launched? When does the launch window open and close? On what trajectory should the spacecraft arrive in lunar orbit? How long will it take the spacecraft to get to the moon? How and when should orbital "injection" take place? How and when should the spacecraft get to its target(s), and at what altitude above the lunar surface should it take the pictures? Where does the spacecraft need to be relative to the sun for taking optimal pictures of the lunar surface? Answering these questions also meant that NASA's mission planners had to define the lunar orbits, determine how accurately those orbits could be navigated, and know the fuel requirements. The complete mission profile had to be ready months before launch. And before the critical details of the profile could be made ready, NASA had to select the targeted areas on the lunar surface and decide how many of them were to be photographed during the flight of a single orbiter.53
Originally NASA's plan was to conduct a concentrated mission. The Lunar Orbiter would go up and target a single site of limited dimensions.
[Photo caption: Top NASA officials listen to a LOPO briefing at Langley in December 1966. Sitting to the far right with his hand on his chin is Floyd Thompson; to the left sits Dr. George Mueller, NASA associate administrator for Manned Space Flight. On the wall is a diagram of the sites selected for the "concentrated mission." An accompanying chart illustrated the primary area of photographic interest.]
The country's leading astrogeologists would help in the site selection by identifying the smoothest, most attractive possibilities for a manned lunar landing. The U.S. Geological Survey had drawn huge, detailed maps of the lunar surface from the best available telescopic observations. With these maps, NASA would select one site as the prime target for each of the five Lunar Orbiter missions. During a mission, the spacecraft would travel into orbit and move over the target at the "perilune," or lowest point in the orbit (approximately 50 kilometers [31.1 miles] above the surface); then it would start taking pictures. Successive orbits would be close together longitudinally, and the Lunar Orbiter's camera would resume photographing the surface each time it passed over the site. The high-resolution lens would take a 1-meter-resolution picture of a small area (4 x 16 kilometers) while, at exactly the same time, the medium-resolution lens would take an 8-meter-resolution picture of a wider area (32 x 37 kilometers). The high-resolution lens would photograph at such a rapid interval that the pictures would just barely overlap. The wide-angle pictures, taken by the medium-resolution lens, would have a conveniently wide overlap. All the camera exposures would take place in 24 hours, thus minimizing the threat to the film from a solar flare. The camera's capacity of roughly 200 photographic frames would be devoted to one location. The result would be one area shot in adjacent, overlapping strips. By putting the strips together, NASA would have a picture of a central 1-meter-resolution area surrounded by a broader 8-meter-resolution area - in other words, one large, rich stereoscopic picture of a choice lunar landing site. NASA would learn much about that one ideal place, and the Apollo program would be well served.54
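Some rough arithmetic (an illustrative estimate only, ignoring the frame overlaps just described) conveys the scale of such a concentrated mission:

\[
200\ \text{frames} \times (4\ \text{km} \times 16\ \text{km}) \approx 12{,}800\ \text{km}^2\ \text{of high-resolution coverage}, \qquad
4{,}000 \times 16{,}000 \approx 6.4 \times 10^{7}\ \text{one-meter elements per frame}.
\]

In other words, a single concentrated mission would lavish tens of millions of resolution elements per frame on one candidate site while leaving the rest of the Apollo zone unexamined.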
The plan sounded fine to everyone, at least in the beginning. Langley's Request for Proposals had specified the concentrated mission, and Boeing had submitted the winning proposal based on that mission plan. Moreover, intensive, short-term photography like that called for in a concentrated mission was exactly what Eastman Kodak's high-resolution camera system had been designed for. The camera was a derivative of a spy satellite photo system created specifically for earth reconnaissance missions specified by the DOD.***
As LOPO's mission planners gave the plan more thought, however, they realized that the concentrated mission approach was flawed. Norman Crabill, Langley's head of mission integration for Lunar Orbiter, remembers the question he began to ask himself, "What happens if only one of these missions is going to work? This was in the era of Ranger failures and Surveyor slippage. When you shoot something, you had only a twenty percent probability that it was going to work. It was that bad." On that premise, NASA planned to fly five Lunar Orbiters, hoping that one would operate as it should. "Suppose we go up there and shoot all we [have] on one site, and it turns out to be no good?" fretted Crabill, and others began to worry as well. What if that site was not as smooth as it appeared on the U.S. Geological Survey maps, or a gravitational anomaly or orbital perturbation was present, making that particular area of the moon unsafe for a lunar landing? And what if that Lunar Orbiter turned out to be the only one to work? What then?55
In late 1964, over the course of several weeks, LOPO became more convinced that it should not be putting all its eggs in one basket. "We developed the philosophy that we really didn't want to do the concentrated mission; what we really wanted to do was what we called the 'distributed mission,'" recalls Crabill. The advantage of the distributed mission was that it would enable NASA to inspect several choice targets in the Apollo landing zone with only one spacecraft.56
In early 1965, Norm Crabill and Tom Young of the LOPO mission integration team traveled to the office of the U.S. Geological Survey in Flagstaff, Arizona. There, the Langley engineers consulted with U.S. government astrogeologists John F. McCauley, Lawrence Rowan, and Harold Masursky. Jack McCauley was Flagstaff's top man at the time, but he assigned Larry Rowan, "a young and upcoming guy, very reasonable and very knowledgeable," the job of heading the Flagstaff review of the Lunar Orbiter site selection problem. "We sat down with Rowan at a table with these big lunar charts," and Rowan politely reminded the Langley duo that "the dark areas on the moon were the smoothest." Rowan then pointed to the darkest places across the entire face of the moon.57
Rowan identified 10 good targets. When Crabill and Young made orbital calculations, they became excited. In a few moments, they had realized that they wanted to do the distributed mission. Rowan and his colleagues in Flagstaff also became excited about the prospects. This was undoubtedly the way to catch as many landing sites as possible. The entire Apollo zone of interest was ±45° longitude and ±5° latitude, along the equatorial region of the facing, or near side of the moon. Within that zone, the area that could be photographed via a concentrated mission was small. A single Lunar Orbiter that could photograph 10 sites of that size all within that region would be much more effective. If the data showed that a site chosen by the astrogeologists was not suitable, NASA would have excellent photographic coverage of nine other prime sites. In summary, the distributed mode would give NASA the flexibility to ensure that Lunar Orbiter would provide the landing site information needed by Apollo even if only one Lunar Orbiter mission proved successful.
But there was one big hitch: Eastman Kodak's photo system was not designed for the distributed mission. It was designed for the concentrated mission, in which all the photography would involve just one site and be loaded, shot, and developed in 24 hours. If Lunar Orbiter had to photograph 10 sites, a mission would last at least two weeks. The film system was designed to sustain operations for only a day or two; if the mission lasted longer than that, the Bimat film would stick together, the exposed parts of it would dry out, the film would get stuck in the loops, and the photographic mission would be completely ruined.
When Boeing first heard that NASA had changed its mind and now wanted to do the distributed mission, Helberg and his men balked. According to LOPO's Norman Crabill, Boeing's representatives said, "Look, we understand you want to do this. But, wait. The system was designed, tested, used, and proven in the concentrated mission mode. You can't change it now because it wasn't designed to have the Bimat film in contact for long periods of time. In two weeks' time, some of the Bimat is just going to go, pfft! It's just going to fail!" Boeing understood the good sense of the distributed mission, but as the prime contractor, the company faced a classic technological dilemma. The customer, NASA, wanted to use the system to do something it was not designed to do. This could possibly cause a disastrous failure. Boeing had no recourse but to advise the customer that what it wanted to do could endanger the entire mission.58
The Langley engineers wanted to know whether Boeing could solve the film problem. "We don't know for sure," the Boeing staff replied, "and we don't have the time to find out." NASA suggested that Boeing conduct tests to obtain quantitative data that would define the limits of the film system. Boeing's response was "That's not in the contract."59 The legal documents specified that the Lunar Orbiter should have the capacity to conduct the concentrated mission. If NASA now wanted to change the requirements for developing the Orbiter, then a new contract would have to be negotiated. A stalemate resulted on this issue and lasted until early 1965. The first launch was only a year away.
If LOPO hoped to persuade Boeing to accept the idea of changing a basic mission requirement, it had to know the difference in reliability between the distributed and concentrated missions. If analysis showed that the distributed mission would be far less reliable, then even LOPO might want to reconsider and proceed with the concentrated mission. Crabill gave the job of obtaining this information to Tom Young, a young researcher from the Applied Materials and Physics Division. Crabill had specifically requested that Young be reassigned to LOPO mission integration because, in his opinion, Young was "the brightest guy [he] knew." On the day Young had reported to work with LOPO, Crabill had given him "a big pile of stuff to read," thinking he would be busy and, as Crabill puts it, "out of my hair for quite a while." But two days later, Young returned, having already made his way through all the material. When given the job of the comparative mission reliability analysis, Young went to Boeing in Seattle. In less than two weeks, he found what he needed to know and figured out the percentages: the reliability for the concentrated mission was an unspectacular 60 percent, but for the distributed mission it was only slightly worse, 58 percent. "It was an insignificant difference," Crabill thought when he heard Young's numbers, especially because nobody then really knew how to do that type of analysis. "We didn't gag on the fact that it was pretty low anyway, but we really wanted to do this distributed mission." The Langley researchers decided that the distributed mission was a sensible choice, if the Kodak system could be made to last for the extra time and if Boeing could be persuaded to go along with the mission change.60
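The text reports Young's results but not his method; the short sketch below is only a generic illustration, under assumed numbers, of how two mission plans can be compared by treating overall reliability as a product of independent event probabilities, and of why adding many more photographic targets barely lowers the total when each added operation is itself highly reliable. The base and per-site figures are assumptions, not data from the source.

```python
# Illustrative only: the 60% / 58% figures came from Young's 1965 analysis,
# whose method the text does not describe. Here mission success is modeled
# as the product of independent event reliabilities; all numbers are assumed.

def mission_reliability(base_reliability: float,
                        per_site_reliability: float,
                        n_sites: int) -> float:
    """Probability that the launch/spacecraft systems work AND every
    per-site photographic operation succeeds."""
    return base_reliability * (per_site_reliability ** n_sites)

base = 0.61        # assumed launch + spacecraft + camera reliability
per_site = 0.9965  # assumed reliability of each site's maneuvers and photography

concentrated = mission_reliability(base, per_site, n_sites=1)   # one site
distributed = mission_reliability(base, per_site, n_sites=10)   # ten sites

print(f"concentrated: {concentrated:.2f}")  # ~0.61
print(f"distributed:  {distributed:.2f}")   # ~0.59, only slightly worse
```

Under these assumed numbers the ten-site plan gives up only about two percentage points of overall reliability, which is the shape of the trade-off Young reported.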
LOPO hoped that Young's analysis would prove to Boeing that no essential difference in reliability existed between the two types of missions, but Boeing continued to insist that the concentrated mission was the legal requirement, not the distributed mission. The dispute was a classic case of implementing a project before even the customer was completely sure of what that project should accomplish. In such a situation, the only sensible thing to do was to be flexible.
The problem for Boeing, of course, was that such flexibility might cost the company its financial incentives. If a Lunar Orbiter mission failed, the company worried that it would not be paid the bonus money promised in the contract. Helberg and Nelson discussed this issue in private conversations. Floyd Thompson participated in many of these talks and even visited Seattle to try to facilitate an agreement. In the end, Langley convinced Helberg that the change from a concentrated to a distributed mission would not impact Boeing's incentives. If a mission failed because of the change, LOPO promised that it would assume the responsibility. Boeing would have done its best according to the government's request and instructions, and for that it would not be penalized.61
The missions, however, would not fail. NASA and Boeing would handle the technical problems involving the camera by testing the system to ascertain the definite limits of its reliable operation. From Kodak, the government and the prime contractor obtained hard data regarding the length of time the film could remain set in one place before the curls or bends in the film around the loops became permanent and the torque required to advance the film exceeded the capability of the motor. From these tests, Boeing and LOPO established a set of mission "rules" that had to be followed precisely. For example, to keep the system working, Lunar Orbiter mission controllers at JPL had to advance the film one frame every eight hours. The rules even required that film sometimes be advanced without opening the door of the camera lens. Mission controllers called these nonexposure shots their "film-set frames" and the schedule of photographs their "film budget."62
As a result of the film rules, the distributed mission turned out to be a much busier operation than a concentrated mission would have been. Each time a photograph was taken, including film-set frames, the spacecraft had to be maneuvered. Each maneuver required a command from mission control. LOPO staff worried about the ability of the spacecraft to execute so many maneuvers over such a prolonged period. They feared something would go wrong during a maneuver that would cause them to lose control of the spacecraft. Lunar Orbiter 1, however, flawlessly executed an astounding number of commands, and LOPO staff were able to control spacecraft attitude during all 374 maneuvers.63
Ultimately, the trust between Langley and Boeing allowed each to take the risk of changing to a distributed mission. Boeing trusted Langley to assume responsibility if the mission failed, and Langley trusted Boeing to put its best effort into making the revised plan a success. Had either not fulfilled its promise to the other, Lunar Orbiter would not have achieved its outstanding record.
Simple as this diagram of Lunar Orbiter (left) may look, no spacecraft in NASA history operated more successfully than Lunar Orbiter. Below, Lunar Orbiter I goes through a final inspection in the NASA Hangar S clean room at Kennedy Space Center prior to launch on 10 August 1966. The spacecraft was mounted on a three-axis test stand with its solar panels deployed and high-gain dish antenna extended from the side.
The switch to the distributed mission was not the only instance during the Lunar Orbiter mission when contract specifications were jettisoned to pursue a promising idea. Boeing engineers realized that the Lunar Orbiter project presented a unique opportunity for photographing the earth. When the LOPO staff heard this idea, they were all for it, but Helberg and Boeing management rejected the plan. Turning the spacecraft around so that its camera could catch a quick view of the earth tangential to the moon's surface entailed technical difficulties, including the danger that, once the spacecraft's orientation was changed, mission controllers could lose command of the spacecraft. Despite the risk, NASA urged Boeing to incorporate the maneuver in the mission plan for Lunar Orbiter 1. Helberg refused.64
In some projects, that might have been the end of the matter. People would have been forced to forget the idea and to live within the circumscribed world of what had been legally agreed upon. Langley, however, was not about to give up on this exciting opportunity. Cliff Nelson, Floyd Thompson, and Lee Scherer went to mission control at JPL to talk to Helberg and at last convinced him that he was being too cautious, that "the picture was worth the risk." If any mishap occurred with the spacecraft during the maneuver, NASA again promised that Boeing would still receive compensation and part of its incentive for taking the risk. The enthusiasm of his own staff for the undertaking also influenced Helberg in his final decision to take the picture.65
On 23 August 1966, just as Lunar Orbiter I was about to pass behind the moon, mission controllers executed the necessary maneuvers to point the camera away from the lunar surface and toward the earth. The result was the world's first view of the earth from space. It was called "the picture of the century" and "the greatest shot taken since the invention of photography."****
Not even the color photos of the earth taken during the Apollo missions superseded the impact of this first image of our planet as a little island of life floating in the black and infinite sea of space. 66
Lunar Orbiter defied all the probability studies. All five missions worked extraordinarily well, and with the minor exception of a short delay in the launch of Lunar Orbiter I (the Eastman Kodak camera was not ready), all the missions were on schedule. The launches were three months apart, with the first taking place in August 1966 and the last in August 1967. This virtually perfect flight record was a remarkable achievement, especially considering that Langley had never before managed any sort of flight program into deep space.
Lunar Orbiter accomplished what it was designed to do, and more. Its camera took 1654 photographs. More than half of these (840) were of the proposed Apollo landing sites. Lunar Orbiters I, II, and III took these site pictures from low-flight altitudes, thereby providing detailed coverage of 22 select areas along the equatorial region of the near side of the moon. One of the eight sites scrutinized by Lunar Orbiters II and III was a very smooth area in the Sea of Tranquility. A few years later, in July 1969, Apollo 11 commander Neil Armstrong would navigate the lunar module Eagle to a landing on this site.67
By the end of the third Lunar Orbiter mission, all the photographs needed to cover the Apollo landing sites had been taken. NASA was then free to redesign the last two missions, move away from the pressing engineering objective imposed by Apollo, and go on to explore other regions of the moon for the benefit of science. Eight hundred and eight of the remaining 814 pictures returned by Lunar Orbiters IV and V focused on the rest of the near side, the polar regions, and the mysterious far side of the moon. These were not the first photographs of the "dark side"; a Soviet space probe, Zond III, had taken pictures of it during a fly-by into a solar orbit a year earlier, in July 1965. But the Lunar Orbiter photos were higher quality than the Russian pictures and illuminated some lunarscapes that had never before been seen by the human eye. The six remaining photos were of the spectacular look back at the distant earth. By the time all the photos were taken, about 99 percent of the moon's surface had been covered.
When each Lunar Orbiter completed its photographic mission, the spacecraft continued its flight to gather clues to the nature of the lunar gravitational environment. NASA found these clues valuable in the planning of the Apollo flights. Telemetry data clearly indicated that the moon's gravitational pull was not uniform. The slight dips in the path of the Lunar Orbiters as they passed over certain areas of the moon's surface were caused by gravitational perturbations, which in turn were caused by mascons, concentrations of mass beneath the lunar surface.
The extended missions of the Lunar Orbiters also helped to confirm that radiation levels near the moon were quite low and posed no danger to astronauts unless a major solar flare occurred while they were exposed on the lunar surface. A few months after each Lunar Orbiter mission, NASA deliberately crashed the spacecraft into the lunar surface to study lunar impacts and their seismic consequences. Destroying the spacecraft before it deteriorated and mission controllers had lost command of it ensured that it would not wander into the path of some future mission.68
Whether the Apollo landings could have been made successfully without the photographs from Lunar Orbiter is a difficult question to answer. Without the photos, the manned landings could certainly still have been attempted. In addition to the photographic maps drawn from telescopic observation, engineers could use some good pictures taken from Ranger and Surveyor to guide them. However, the detailed photographic coverage of 22 possible landing sites definitely made NASA's final selection of ideal sites much easier and the pinpointing of landing spots possible.
Furthermore, Lunar Orbiter also contributed important photometric information that proved vital to the Apollo program. Photometry is the science of measuring the intensity of light. Lunar Orbiter planners had to decide where to position the camera to have the best light for taking the high-resolution photographs. When we take pictures on earth, we normally want to have the sun behind us so it is shining directly on the target. But a photo taken of the lunar surface in these same circumstances produces a peculiar photometric function: the moon looks flat. Even minor topographical features are indistinguishable because of the intensity of the sunlight reflecting from the micrometeorite-filled lunar surface. The engineers in LOPO had to determine the best position for photographing the moon. After studying the problem (Taback, Crabill, and Young led the attack on this problem), LOPO's answer was that the sun should indeed be behind the spacecraft, but photographs should be taken when the sun was only 15 degrees above the horizon.69
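The source gives only the 15-degree figure; the sketch below uses elementary shadow geometry (shadow length = feature height / tan(sun elevation)), which is not spelled out in the text, purely to illustrate why a low sun angle makes small boulders and craters stand out at 1-meter resolution while a near-overhead sun washes the relief out.

```python
import math

def shadow_length(feature_height_m: float, sun_elevation_deg: float) -> float:
    """Length of the shadow cast on level ground by a vertical feature."""
    return feature_height_m / math.tan(math.radians(sun_elevation_deg))

# A hypothetical 1-meter boulder photographed under different sun elevations.
for elevation in (15, 30, 60, 85):
    print(f"sun at {elevation:2d} deg -> shadow ~{shadow_length(1.0, elevation):.1f} m")

# sun at 15 deg -> shadow ~3.7 m  (several pixels wide at 1-m resolution)
# sun at 30 deg -> shadow ~1.7 m
# sun at 60 deg -> shadow ~0.6 m
# sun at 85 deg -> shadow ~0.1 m  (near-overhead sun: the surface looks flat)
```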
Long before it was time for the first Apollo launch, LOPO's handling of the lunar photometric function was common knowledge throughout NASA and the aerospace industry. The BellComm scientists and engineers who reviewed Apollo planning quickly realized that astronauts approaching the moon to make a landing needed, like Lunar Orbiter, to be in the best position for viewing the moon's topography. Although a computer program would pinpoint the Apollo landing site, the computer's choice might not be suitable. If that was the case, astronauts would have to rely on their own eyes to choose a spot. If the sun was in the wrong position, they would not be able to make out craters and boulders, the surface would appear deceptively flat, and the choice might be disastrous. Apollo 11 commander Neil Armstrong did not like the spot picked by the computer for the Eagle landing. Because NASA had planned for him to be in the best viewing position relative to the sun, Armstrong could see that the place was "littered with boulders the size of Volkswagens." So he flew on. He had to go another 1500 meters before he saw a spot where he could set the lunar module down safely.70
NASA might have considered the special photometric functions involved in viewing the moon during Apollo missions without Lunar Orbiter, but the experience of the Lunar Orbiter missions took the guesswork out of the calculations. NASA knew that its astronauts would be able to see what they needed to see to avoid surface hazards. This is a little-known but important contribution from Lunar Orbiter.
In the early 1970s, Erasmus H. Kloman, a senior research associate with the National Academy of Public Administration, completed an extensive comparative investigation of NASA's handling of its Surveyor and Lunar Orbiter projects. After a lengthy review, NASA published a shortened and distilled version of Kloman's larger study as Unmanned Space Project Management: Surveyor and Lunar Orbiter. The result, even in the expurgated version with all names of responsible individuals left out, was a penetrating study in "sharp contrasts" that should be required reading for every project manager in business, industry, or government.
Based on his analysis of Surveyor and Lunar Orbiter, Kloman concluded that project management has no secrets of success. The key elements are enthusiasm for the project, a clear understanding of the project's objective, and supportive and flexible interpersonal and interoffice relationships. The history of Surveyor and Lunar Orbiter, Kloman wrote, "serves primarily as a confirmation of old truths about the so-called basic principles of management rather than a revelation of new ones." Kloman wrote that Langley achieved Lunar Orbiter's objectives by "playing it by the book." By this, Kloman meant that Langley applied those simple precepts of good management; he did not mean that success was achieved through the thoughtless application of a strict formula. Kloman understood that Langley's project engineers broke many rules and often improvised as they went along. Enthusiasm, understanding, support, and flexibility allowed project staff to adapt the mission to new information, ideas, or circumstances. "Whereas the Surveyor lessons include many illustrations of how 'not to' set out on a project or how to correct for early misdirections," Kloman argued, "Lunar Orbiter shows how good sound precepts and directions from the beginning can keep a project on track."71
Lunar Orbiter, however, owes much of its success to Surveyor. LOPO staff were able to learn from the mistakes made in the Surveyor project. NASA headquarters was responsible for some of these mistakes. The complexity of Surveyor was underestimated, unrealistic manpower and financial ceilings were imposed, an "unreasonably open-ended combination of scientific experiments for the payload" was insisted upon for too long, too many changes in the scope and objectives of the project were made, and the project was tied to the unreliable Centaur launch vehicle.72 NASA headquarters corrected these mistakes. In addition, Langley representatives learned from JPL's mistakes and problems. They talked at great length to JPL staff in Pasadena about Surveyor both before and after accepting the responsibility for Lunar Orbiter. From these conversations, Langley acquired a great deal of knowledge about the design and management of an unmanned space mission. JPL scientists and engineers even conducted an informal "space school" that helped to educate several members of LOPO and Boeing's team about key details of space mission design and operations.
The interpersonal skills of the individuals responsible for Lunar Orbiter, however, appear to have been the essential key to success. These skills centered more on the ability to work with other people than they did on what one might presume to be the more critical and esoteric managerial, conceptual, and technical abilities. In Kloman's words, "individual personal qualities and management capabilities can at times be a determining influence in overall project performance."73 Compatibility among individual managers such as Nelson and Helberg, and the ability of those managers to stimulate good working relationships between people, proved a winning combination for Lunar Orbiter.
Norman Crabill made these comments about Lunar Orbiter's management: "We had some people who weren't afraid to use their own judgment instead of relying on rules. These people could think and find the essence of a problem, either by discovering the solution themselves or energizing the troops to come up with an alternative which would work. They were absolute naturals at that job."74
Lunar Orbiter was a pathfinder for Apollo, and it was an outstanding contribution by Langley Research Center to the early space program. The old NACA aeronautics laboratory proved not only that it could handle a major deep space mission, but also that it could achieve an extraordinary record of success that matched or surpassed anything yet tried by NASA. When the project ended and LOPO members went back into functional research divisions, Langley possessed a pool of experienced individuals who were ready, if the time came, to plan and manage yet another major project. That opportunity came quickly in the late 1960s with the inception of Viking, a much more complicated and challenging project designed to send unmanned reconnaissance orbiters and landing probes to Mars. When Viking was approved, NASA headquarters assigned the project to "those plumbers" at Langley. The old LOPO team formed the nucleus of Langley's much larger Viking Project Office. With this team, Langley would once again manage a project that would be virtually an unqualified success.
* Later in Apollo planning, engineers at the Manned Spacecraft Center in Houston thought that deployment of a penetrometer from the LEM during its final approach to landing would prove useful. The penetrometer would "sound" the anticipated target and thereby determine whether surface conditions were conducive to landing. Should surface conditions prove unsatisfactory, the LEM could be flown to another spot or the landing could be aborted. In the end, NASA deemed the experiment unnecessary. What the Surveyor missions found out about the nature of the lunar soil (that it resembled basalt and had the consistency of damp sand) made NASA so confident about the hardness of the surface that it decided this penetrometer experiment could be deleted. For more information, see Ivan D. Ertel and Roland W. Newkirk, The Apollo Spacecraft: A Chronology, vol. 4, NASA SP-4009 (Washington, 1978), p. 24
** Edgar Cortright and Oran Nicks would come to have more than a passing familiarity with the capabilities of Langley Research Center. In 1968, NASA would name Cortright to succeed Thompson as the center's director. Shortly thereafter, Cortright named Nicks as his deputy director. Both men then stayed at the center into the mid-1970s.
*** In the top-secret DOD system, the camera with the film inside apparently would reenter the atmosphere inside a heat-shielded package that parachuted down, was hooked, and was physically retrieved in midair (if all went as planned) by a specially equipped U.S. Air Force C-119 cargo airplane. It was obviously a very unsatisfactory system, but in the days before advanced electronic systems, it was the best high-resolution satellite reconnaissance system that modern technology could provide. Few NASA people were ever privy to many of the details of how the "black box" actually worked, because they did not have "the need to know." However, they figured that it had been designed, as one LOPO engineer has described in much oversimplified layman's terms, "so when a commander said, 'we've got the target', bop, take your snapshots, zap, zap, zap, get it down from orbit, retrieve it and bring it home, rush it off to Kodak, and get your pictures." (Norman Crabill interview with author, Hampton, Va., 28 August 1991.)
**** The unprecedented photo also provided the first oblique perspectives of the lunar surface. All other photographs taken during the first mission were shot from a position perpendicular to the surface and thus did not depict the moon in three dimensions. In subsequent missions, NASA made sure to include this sort of oblique photography. Following the first mission, Boeing prepared a booklet entitled Lunar Orbiter I - Photography (NASA Langley, 1965), which gave a detailed technical description of the earth-moon photographs; see especially pp. 64-71. | http://history.nasa.gov/SP-4308/ch10.htm | 13
14 | Land has always been agriculture's most valuable resource and has historically been this nation's cornerstone of individual wealth. How that land was to be used, who controlled its riches, and what the structure of American agriculture would be were all issues that would help spark the American Revolution.
The prominent USDA agricultural historian Wayne D. Rasmussen reminded us that when we speak of such structures we are talking about the basic control and organization of resources needed for farm production. "Questions of farm structures have always related to the structure of the entire food and fiber system and, indeed to the total economic, social and political organization of the United States."
When the early English colonists first settled in this new land called America, for example, they soon faced the "quit-rent" system, a holdover from the centuries-old "land ownership" system in Europe.
Under such a system tenants could only get title to the land subject to a perpetual small fee paid to an absentee landlord who usually resided in England.
In addition to this "quit-rent" system, the new farmers also came to resent the British government's efforts after 1763 to forbid the establishment of settlements west of the Alleghenies and its efforts to control the marketing of products from that area by imposing unfair taxes on them.
It was these three efforts by the British to regulate the structure of American agriculture that became the basic causes for the American Revolution, a war that was led and fought mostly by planters and farmers.
Soon after independence was proclaimed and a federal constitution established, the fledgling government sought to encourage a land policy that would discourage sectionalism and provide an equal opportunity for all citizens to become land owners. By developing a township and range land survey system the Federal government sought to divide blocks of property equally so as to provide equal access to the land.
Already, however, feudal estates modeled after traditional Western European land holdings were in existence in the new land along with a number of various politically and religiously organized villages located principally in New England.
When the early settlers began to move inland at the end of the 18th century, small-scale farming started to dominate the agricultural scene. Such a trend began to alarm industrialists, bankers and the large plantation owners because it signaled the effort by the new settlers to distribute the country's real wealth -- its natural resources -- into as many hands as possible.
The consequences of this conflict would soon become apparent, for it was in this colonial period that the dual US agricultural economy that persists to this day first emerged. On one hand there was a traditional subsistence form of agriculture, while on the other hand was a system destined to become the forerunner of modern corporate agribusiness.
Geographer Ingolf Voegler characterizes the land-settling efforts by early pioneers as a desire to achieve an egalitarian land base. He reminds us that it was that revolutionary concept through which economic democracy was meant to sustain political democracy. This same idea would also soon come to inspire people from abroad by the millions to settle in this new, rich nation and/or attempt to emulate its ideals elsewhere in the world.
"For Jefferson and other eighteenth-century intellectuals, a nation of small farmers would provide political freedom, independence and self-reliance, and the ability to resist political oppression. In their minds, these goals were predicated on the right to own property, especially land. The right to land, the primary form of wealth in the eighteenth century, meant the right to a job and economic independence."
Jefferson reasoned that in a democracy, access to the land must be provided by the national government.
"Whenever there are in any country uncultivated lands and unemployed poor, it is clear that the laws of property have been so far extended as to violate a natural right. The earth is given as a common stock for man to labor and live on. If, for the encouragement of industry, we allow it to be appropriated, we must take care that other employment be provided to those excluded from the appropriation. If we do not, the fundamental right to labor the earth returns to the unemployed."
Unfortunately, the Ordinance of 1785 is an early illustration of how to this day we have basically failed to insure that the distribution of land in the United States shall be determined through a system that recognizes "equal justice under law."
This ordinance, which authorized the survey of all US-owned lands ahead of settlement into six-mile-square townships and square-mile sections of 640 acres each, was drafted by a committee chaired by Jefferson. Although the prevailing philosophy of agrarian democracy that motivated the ordinance was a dominant influence for 75 years, in practice the ideal was poorly served.
From the start, for example, Alexander Hamilton saw speculators and land companies, not individuals, becoming the principal buyers of public land. In turn, such entities would then be free to sell off that land to actual settlers for a profit. In fact, this is what did happen and it was not until the Homestead Act of 1862 that an attempt was made to change the practice.
Also stipulated in this Ordinance was the provision that one-half of the townships were to be sold as a whole and the other half in 640-acre sections. But, because such sales were by the auction system, it became relatively easy for land companies and speculators to gain large tracts of land at cheap prices and then resell them at excessive profits to actual settlers. Over 220 million acres would be bought and sold in this fashion.
Jefferson's belief that square grids were intrinsically democratic unfortunately led him to overlook the fact of existing geographical differences, which was a complicating factor from the start, one that quickly led to the packaging of land parcels and the advent of a lucrative and flourishing real estate business in America.
The government soon began selling land at prices much higher than its original purchase price, which discouraged actual settlers from purchasing it. The cost to the federal government, including interest, of the major and historic land purchases of the early 19th century was four and one-half cents per acre.
Much of this same land, under the provisions of the 1785 ordinance, was in turn sold to settlers for over one dollar per acre. Revenue realized from these sales, of course, was a key means by which the Federal government and Congress, composed at the time of many large landowners, raised funds.
A newly adopted Federal Constitution and an ever-expanding market for American goods now began preparing the groundwork for later technological innovations that would encourage new economic and social developments in agriculture.
One has only to look at the composition of the delegates to the convention that drafted this Constitution to see that while future expansion of the national economy was important, protecting one's own immediate financial interests was absolutely vital to many of its drafters.
Of the 55 delegates to the Constitutional Convention, 40 were holders of public securities, 14 were land speculators, 24 were moneylenders, 15 were slave owners and at least 11 were entrepreneurs. No one represented small farmers or artisans.
A.V. Krebs publishes the online newsletter The Agribusiness Examiner, which monitors corporate agribusiness from a public interest perspective. Email firstname.lastname@example.org. | http://www.populist.com/06.10.krebs.html | 13 |
21 |
Bakumatsu (幕末 bakumatsu, "Late Tokugawa Shogunate", literally "end of the curtain") refers to the final years of the Edo period when the Tokugawa shogunate ended. Between 1853 and 1867 Japan ended its isolationist foreign policy known as sakoku and changed from a feudal shogunate to the Meiji government. The major ideological-political divide during this period was between the pro-imperial nationalists called ishin shishi and the shogunate forces, which included the elite shinsengumi swordsmen.
Although these two groups were the most visible powers, many other factions attempted to use the chaos of Bakumatsu to seize personal power.[page needed] Furthermore there were two other main driving forces for dissent: first, growing resentment on the part of the tozama daimyo (or outside lords), and second, growing anti-western sentiment following the arrival of Matthew C. Perry. The first related to those lords who had fought against Tokugawa forces at the Battle of Sekigahara in 1600 and had from that point on been excluded permanently from all powerful positions within the shogunate. The second was to be expressed in the phrase sonnō jōi, or "revere the Emperor, expel the barbarians". The turning point of the Bakumatsu was during the Boshin War and the Battle of Toba-Fushimi when pro-shogunate forces were defeated.[page needed]
Foreign frictions
Various frictions with foreign shipping led Japan to take defensive actions from the beginning of the 19th century. Western ships were increasing their presence around Japan due to whaling activities and the China trade, and were hoping for Japan to become a base for supply, or at least a place where shipwrecked crews could receive assistance. The violent demands made by the British frigate Phaeton in 1808 shocked many in Japan. In 1825, the Edict to expel foreigners at all cost (異国船無二念打払令 Ikokusen Muninen Uchiharei, the "Don't think twice" policy) was issued by the Shogunate, prohibiting any contact with foreigners, and remained in place until 1842.
Meanwhile, Japan endeavoured to learn about foreign sciences through the process of Rangaku ("Western studies"). In order to reinforce Japan's capability to carry out the orders to repel Westerners, some, such as the Nagasaki-based Takashima Shūhan, managed to obtain weapons through the Dutch at Dejima, such as field guns, mortars and firearms. Various domains sent students to learn from Takashima in Nagasaki: from Satsuma Domain after the intrusion of an American warship in 1837 in Kagoshima Bay, and from Saga Domain and Chōshū Domain, all southern domains most exposed to Western intrusions. These domains also studied the manufacture of Western weapons, and by 1852 Satsuma and Saga had reverberatory furnaces to produce the iron necessary for firearms.
Following the Morrison Incident involving the Morrison under Charles W. King in 1837, Egawa Hidetatsu was put in charge of establishing the defense of Tokyo Bay against Western intrusions in 1839. After the victory of the British over the Chinese in the 1840 Opium War, many Japanese realized that traditional ways would not be sufficient to repel Western intrusions. In order to resist Western military forces, Western guns were studied and demonstrations made in 1841 by Takashima Shūhan to the Tokugawa Shogunate.
A national debate was already taking place about how to better avoid foreign domination. Some like Egawa claimed that it was necessary to use their own techniques to repel them. Others, such as Torii Yōzō argued that only traditional Japanese methods should be employed and reinforced. Egawa argued that just as Confucianism and Buddhism had been introduced from abroad, it made sense to introduce useful Western techniques. A theoretical synthesis of "Western knowledge" and "Eastern morality" would later be accomplished by Sakuma Shōzan and Yokoi Shōnan, in view of "controlling the barbarians with their own methods".
After 1839, however, conservatives tended to prevail, and students of Western sciences were accused of treason (Bansha no goku) and were put under house arrest (Takashima Shūhan), forced to commit ritual suicide (Watanabe Kazan, Takano Chōei), or even assassinated, as in the case of Sakuma Shōzan.
Commodore Perry (1853–54)
When Commodore Matthew C. Perry's four-ship squadron appeared in Edo Bay (Tokyo Bay) in July 1853, the bakufu (shogunate) was thrown into turmoil. Commodore Perry was fully prepared for hostilities if his negotiations with the Japanese failed, and threatened to open fire if the Japanese refused to negotiate. He gave them two white flags, telling them to hoist the flags when they wished a bombardment from his fleet to cease and to surrender. To demonstrate his weapons, Perry ordered his ships to attack several buildings around the harbor. Perry's ships were equipped with new Paixhans shell guns, capable of wreaking destruction wherever a shell landed.
Fortifications were established at Odaiba in Tokyo Bay in order to protect Edo from an American incursion. Industrial developments were also soon started in order to build modern cannons. A reverberatory furnace was established by Egawa Hidetatsu in Nirayama in order to cast cannons. Attempts were made at building Western-style warships such as the Shōhei Maru by using Dutch textbooks.
The American fleet returned in 1854. The chairman of the senior councillors, Abe Masahiro, was responsible for dealing with the Americans. Having no precedent to manage this threat to national security, Abe tried to balance the desires of the senior councillors to compromise with the foreigners, of the emperor who wanted to keep the foreigners out, and of the feudal daimyo rulers who wanted to go to war. Lacking consensus, Abe decided to compromise by accepting Perry's demands for opening Japan to foreign trade while also making military preparations. In March 1854, the Treaty of Peace and Amity (or Treaty of Kanagawa) maintained the prohibition on trade but opened the ports of Nagasaki, Shimoda and Hakodate to American whaling ships seeking provisions, guaranteed good treatment to shipwrecked American sailors, and allowed a United States consul to take up residence in Shimoda, a seaport on the Izu Peninsula, southwest of Edo. In February 1855, the Russians followed suit with the Treaty of Shimoda.
Political troubles and modernization
The resulting damage to the bakufu was significant. Debate over government policy was unusual and had engendered public criticism of the bakufu. In the hope of enlisting the support of new allies, Abe, to the consternation of the fudai, had consulted with the shinpan and tozama daimyo, further undermining the already weakened bakufu.
In the Ansei Reform (1854–1856), Abe then tried to strengthen the regime by ordering Dutch warships and armaments from the Netherlands and building new port defenses. In 1855, with Dutch assistance, the Shogunate acquired its first steam warship, Kankō Maru, which was used for training, and opened the Nagasaki Naval Training Center with Dutch instructors, and a Western-style military school was established at Edo. In 1857, it acquired its first screw-driven steam warship, the Kanrin Maru. Scientific knowledge was quickly expanded from the pre-existing foundation of Western knowledge, or "Rangaku".
Opposition to Abe increased within fudai circles, which opposed opening bakufu councils to tozama daimyo, and he was replaced in 1855 as chairman of the senior councilors by Hotta Masayoshi (1810–1864). At the head of the dissident faction was Tokugawa Nariaki, who had long embraced a militant loyalty to the emperor along with anti-foreign sentiments, and who had been put in charge of national defense in 1854. The Mito school—based on neo-Confucian and Shinto principles—had as its goals the restoration of the imperial institution and the turning back of the West.
The period saw a dramatic series of earthquakes, the Ansei Great Earthquakes, including the Ansei-Tōkai earthquake in December 1854, the Ansei-Nankai earthquake the following day, and the Ansei Edo earthquake in November 1855. An earthquake and tsunami struck Shimoda on the Izu peninsula in the Ansei-Tōkai earthquake of December 23, 1854, and because the port had just been designated as the prospective location for a US consulate, some construed the natural disasters as a demonstration of the displeasure of the gods.
Treaties of Amity and Commerce (1858)
Following the nomination of Townsend Harris as U.S. Consul in 1856 and two years of negotiation, the "Treaty of Amity and Commerce" was signed in 1858 and put into application from mid-1859. In a major diplomatic coup, Harris had repeatedly pointed out the aggressive colonialism of France and Great Britain against China in the current Second Opium War (1856–1860), suggesting that these countries would not hesitate to go to war against Japan as well, and that the United States offered a peaceful alternative.
The most important points of the Treaty were:
- exchange of diplomatic agents.
- the opening of Edo, Kobe, Nagasaki, Niigata, and Yokohama to foreign trade as ports.
- ability of United States citizens to live and trade at will in those ports (only opium trade was prohibited).
- a system of extraterritoriality that provided for the subjugation of foreign residents to the laws of their own consular courts instead of the Japanese law system.
- fixed low import-export duties, subject to international control.
- ability for Japan to purchase American shipping and weapons (three American steamships were delivered to Japan in 1862).
Japan was also forced to apply any further conditions granted to other foreign nations in the future to the United States, under the "most favoured nation" provision. Soon several foreign nations followed suit and obtained treaties with Japan (the Ansei Five-Power Treaties: with the United States (Harris Treaty) on July 29, 1858, the Netherlands (Treaty of Amity and Commerce between the Netherlands and Japan) on August 18, Russia (Treaty of Amity and Commerce between Russia and Japan) on August 19, the United Kingdom (Anglo-Japanese Treaty of Amity and Commerce) on August 26, and France (Treaty of Amity and Commerce between France and Japan) on October 9).
Trading houses were quickly set up in the open ports.
Crisis and conflict
Political crisis
Hotta lost the support of key daimyo, and when Tokugawa Nariaki opposed the new treaty, Hotta sought imperial sanction. The court officials, perceiving the weakness of the bakufu, rejected Hotta's request; this resulted in Hotta's resignation and suddenly embroiled Kyoto and the emperor in Japan's internal politics for the first time in many centuries. When the shogun died without an heir, Nariaki appealed to the court for support of his own son, Tokugawa Yoshinobu (or Keiki), for shogun, a candidate favored by the shinpan and tozama daimyo. The fudai won the power struggle, however, installing Ii Naosuke, signing the Ansei Five-Power Treaties (thus ending more than 200 years of seclusion without imperial sanction, which was granted only in 1865), and arresting Nariaki and Yoshinobu and executing Yoshida Shōin (1830–1859, a leading sonnō-jōi intellectual who had opposed the American treaty and plotted a revolution against the bakufu), a crackdown known as the Ansei Purge.
Attacks on foreigners and their supporters
From 1859, the ports of Nagasaki, Hakodate and Yokohama became open to foreign traders as a consequence of the Treaties. Foreigners arrived in Yokohama and Kanagawa in great numbers, giving rise to trouble with the samurai. Violence increased against the foreigners and those who dealt with them. Murders of foreigners and collaborating Japanese soon followed. On 26 August 1859, a Russian sailor was cut to pieces in the streets of Yokohama. In early 1860, two Dutch captains were slaughtered, also in Yokohama. Chinese and native servants of foreigners were also killed.
Ii Naosuke, the shogunate's chief minister (tairō), who had signed the Harris Treaty and tried to eliminate opposition to Westernization with the Ansei Purge, was himself murdered in March 1860 in the Sakuradamon incident. A servant of the French Minister was attacked at the end of 1860. On 14 January 1861, Henry Heusken, Secretary to the American mission, was attacked and murdered. On 5 July 1861, a group of samurai attacked the British Legation, resulting in two deaths. During this period, about one foreigner was killed every month. In September 1862, the Richardson Affair occurred, which would force foreign nations to take decisive action in order to protect foreigners and guarantee the implementation of Treaty provisions. In May 1863, the US legation in Edo was torched.
|Japanese foreign trade (in Mexican dollars)|1860|1865|
|Exports|4.7 million|17 million|
|Imports|1.66 million|15 million|
The opening of Japan to uncontrolled foreign trade brought massive economic instability. While some entrepreneurs prospered, many others went bankrupt. Unemployment rose, as well as inflation. Coincidentally, major famines also increased the price of food drastically. Incidents occurred between brash foreigners, described as "the scum of the earth" by a contemporary diplomat, and the Japanese.
Japan's monetary system, based on Tokugawa coinage, also broke down. Traditionally, Japan's exchange rate between gold and silver was 1:5, whereas international rates were of the order of 1:15. This led to massive purchases of gold by foreigners, and ultimately forced the Japanese authorities to devalue their currency. There was a massive outflow of gold from Japan, as foreigners rushed to exchange their silver for "token" silver Japanese coinage and again exchange these against gold, giving a 200% profit to the transaction. In 1860, about 4 million ryō thus left Japan, that is, about 70 tons of gold. This effectively destroyed Japan's gold standard system and forced it to return to a weight-based system aligned with international rates. The Bakufu instead responded to the crises by debasing the gold content of its coins by two thirds, so as to match foreign gold-silver exchange ratios.
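A rough worked example, using only the two ratios quoted above and ignoring coin denominations, fees, and transport, shows where the roughly 200 percent profit came from.

```python
# Ratio arithmetic only; fees, shipping and actual coin denominations ignored.
silver_in = 100.0                 # silver brought into Japan
gold_bought = silver_in / 5.0     # Japanese rate: 5 silver per 1 gold
silver_out = gold_bought * 15.0   # international rate: 1 gold per 15 silver

profit_pct = (silver_out - silver_in) / silver_in * 100
print(profit_pct)  # 200.0 -- the ~200% gain mentioned in the text
```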
During the 1860s, peasant uprisings (hyakushō ikki) and urban disturbances (uchikowashi) multiplied. "World renewal" movements (yonaoshi ikki) appeared, as well as feverish, hysterical movements such as the Eejanaika ("Why not?").
Several missions were sent abroad by the Bakufu in order to learn about Western civilization, revise unequal treaties, and delay the opening of cities and harbours to foreign trade. These efforts towards revision remained largely unsuccessful.
Imperial "Order to expel barbarians" (1863)
Belligerent opposition to Western influence further erupted into open conflict when the Emperor Kōmei, breaking with centuries of imperial tradition, began to take an active role in matters of state and issued, on March 11 and April 11, 1863, his "Order to expel barbarians" (攘夷実行の勅命 jōi jikkō no chokumei ).
The Shimonoseki-based Chōshū clan, under Lord Mori Takachika, acted on the Order and began to take steps to expel all foreigners from the date fixed as a deadline (May 10, Lunar calendar). Openly defying the shogunate, Takachika ordered his forces to fire without warning on all foreign ships traversing Shimonoseki Strait.
Under pressure from the Emperor, the Shogun was also forced to issue a declaration promulgating the end of relations with foreigners. The order was forwarded to foreign legations by Ogasawara Zusho no Kami on June 24, 1863:
"The orders of the Tycoon, received from Kyoto, are to the effect that the ports are to be closed and the foreigners driven out, because the people of the country do not desire intercourse with foreign countries."—Missive of Ogasawara Dzusho no Kami, June 24, 1863, quoted in A Diplomat in Japan, Ernest Satow, p75
Lieutenant-Colonel Neale, head of the British legation, responded in very strong terms, equating the move with a declaration of war:
"It is, in fact, a declaration of war by Japan itself against the whole of the Treaty Powers, and the consequences of which, if not at once arrested, it will have to expiate by the severest and most merited chastisement"—Edward Neale, June 24, 1863. Quoted in A Diplomat in Japan, Ernest Satow, p77
A Second Japanese Embassy to Europe would be sent in December 1863, with the mission to obtain European support to reinstate Japan's former closure to foreign trade, and especially stop foreign access to the harbor of Yokohama. The Embassy ended in total failure as European powers did not see any advantages in yielding to its demands.
Military interventions against Sonnō Jōi (1863–1865)
American influence, so important in the beginning, waned after 1861 due to the advent of the American Civil War (1861–1865) that monopolized all available U.S. resources. This influence would be replaced by that of the British, the Dutch and the French.
The two ringleaders of the opposition to the Bakufu came from the provinces of Satsuma (present-day Kagoshima prefecture) and Chōshū (present-day Yamaguchi prefecture), two of the strongest tozama anti-shogunate domains in Edo-period Japan. Satsuma military leaders Saigō Takamori and Okubo Toshimichi were brought together with Katsura Kogorō (Kido Takayoshi) of Chōshū. As Satsuma had been directly involved in the murder of Richardson, and Chōshū in the attacks on foreign shipping in Shimonoseki, and as the Bakufu declared itself unable to placate them, Allied forces decided to mount direct military expeditions.
American intervention (July 1863)
On the morning of July 16, 1863, under sanction by Minister Pruyn, in an apparent swift response to the attack on the Pembroke, the U.S. frigate USS Wyoming under Captain McDougal himself sailed into the strait and single-handedly engaged the U.S.-built but poorly manned rebel fleet. For almost two hours before withdrawing, McDougal sank one enemy vessel and severely damaged the other two, inflicting some forty Japanese casualties, while the Wyoming suffered extensive damage with fourteen crew dead or wounded.
French intervention (August 1863)
On the heels of McDougal's engagement, two weeks later a French landing force of two warships, the Tancrède and the Dupleix, and 250 men under Captain Benjamin Jaurès swept into Shimonoseki and destroyed a small town, together with at least one artillery emplacement.
British bombardment of Kagoshima (August 1863)
In August 1863, the Bombardment of Kagoshima took place, in retaliation for the Namamugi incident and the murder of the English trader Richardson. The British Royal Navy bombarded the town of Kagoshima and destroyed several ships. Satsuma, however, later negotiated and paid 25,000 pounds, but did not hand over Richardson's killers, and in exchange obtained an agreement by Great Britain to supply steam warships to Satsuma. The conflict actually became the starting point of a close relationship between Satsuma and Great Britain, which became major allies in the ensuing Boshin War. From the start, Satsuma had generally been in favour of the opening and modernization of Japan. Although the Namamugi Incident was unfortunate, it was not characteristic of Satsuma's policy, and was rather unfairly branded as an example of anti-foreign sonnō jōi sentiment, as a justification for a strong Western show of force.
Repression of the Mito rebellion (May 1864)
On 2 May 1864, another rebellion erupted against the power of the Shogunate, the Mito rebellion. This rebellion, too, was carried out in the name of Sonnō Jōi: the expulsion of the Western "barbarians" and the return to Imperial rule. The Shogunate managed to send an army to quell the revolt, which ended in bloodshed with the surrender of the rebels on 14 January 1865.
Chōshū rebellion
In the Kinmon Incident on 20 August 1864, troops from Chōshū Domain attempted to take control of Kyoto and the Imperial Palace in order to pursue the objective of Sonnō Jōi. This also led to a punitive expedition by the Tokugawa government, the First Chōshū expedition (長州征討).
Allied bombardment of Shimonoseki (September 1864)
Western nations planned an armed retaliation against Japanese opposition: the Bombardment of Shimonoseki. The Allied intervention occurred in September 1864, combining the naval forces of the United Kingdom, the Netherlands, France and the United States, against the powerful daimyo Mōri Takachika of the Chōshū Domain based in Shimonoseki, Japan. This conflict threatened to involve America, which, in 1864, was already torn by its own civil war.
Conservative reaffirmation
Following these successes against the imperial movement in Japan, the Shogunate was able to reassert a certain level of primacy at the end of 1864. The traditional policy of sankin kōtai was reinstated, and remnants of the rebellions of 1863–64 as well as the Shishi movement were brutally suppressed throughout the land.
The military interventions by foreign powers also proved that Japan was no military match for the West, and that expelling foreigners was not a realistic policy. The Sonnō Jōi movement thus lost its initial impetus. The structural weaknesses of the Bakufu, however, remained an issue, and the focus of opposition would then shift to creating a strong government under a single authority.
Twilight of the Bakufu
As the Bakufu proved incapable of paying the $3,000,000 indemnity demanded by foreign nations for the intervention at Shimonoseki, foreign nations agreed to reduce the amount in exchange for a ratification of the Harris Treaty by the Emperor, a lowering of customs tariffs to a uniform 5%, and the opening of the harbours of Hyōgo (modern Kōbe) and Osaka to foreign trade. In order to press their demands more forcefully, a squadron of four British, one Dutch and three French warships was sent to the harbour of Hyōgo in November 1865. Various incursions were made by foreign forces, until the Emperor finally agreed to change his total opposition to the Treaties, by formally allowing the Shogun to handle negotiations with foreign powers.
These conflicts led to the realization that head-on conflict with Western nations was not a solution for Japan. As the Bakufu continued its modernization efforts, Western daimyos (especially from Satsuma and Chōshū) also continued to modernize intensively in order to build a stronger Japan and to establish a more legitimate government under Imperial power.
Second Chōshū expedition (June 1866)
The Shogunate led a second punitive expedition against Chōshū from June 1866, but the Shogunate was actually defeated by the more modern and better organized troops of Chōshū. The new Shogun Tokugawa Yoshinobu managed to negotiate a ceasefire due to the death of the previous Shogun, but the prestige of the Shogunate was nevertheless seriously affected.
This reversal encouraged the Bakufu to take drastic steps towards modernization.
Renewal and modernization
During the last years of the bakufu, or bakumatsu, the bakufu took strong measures to try to reassert its dominance, although its involvement with modernization and foreign powers was to make it a target of anti-Western sentiment throughout the country.
Naval students were sent to study in Western naval schools for several years, starting a tradition of foreign-educated future leaders, such as Admiral Enomoto. The French naval engineer Léonce Verny was hired to build naval arsenals, such as Yokosuka and Nagasaki. By the end of the Tokugawa shogunate in 1868, the Japanese navy of the shogun already possessed eight western-style steam warships around the flagship Kaiyō Maru, which were used against pro-imperial forces during the Boshin war, under the command of Admiral Enomoto. A French Military Mission to Japan (1867) was established to help modernize the armies of the Bakufu. Japan sent a delegation to and participated in the 1867 World Fair in Paris.
Tokugawa Yoshinobu (also known as Keiki) reluctantly became head of the Tokugawa house and shogun following the unexpected death of Tokugawa Iemochi in mid-1866. In 1867, Emperor Kōmei died and was succeeded by his second son, Mutsuhito, as Emperor Meiji. Tokugawa Yoshinobu tried to reorganize the government under the Emperor while preserving the shogun's leadership role, a system known as kōbu gattai. Fearing the growing power of the Satsuma and Chōshū daimyo, other daimyo called for returning the shogun's political power to the emperor and a council of daimyo chaired by the former Tokugawa shogun. With the threat of an imminent Satsuma-Chōshū led military action, Keiki moved pre-emptively by surrendering some of his previous authority.
Boshin war
After Keiki had temporarily avoided the growing conflict, anti-shogunal forces instigated widespread turmoil in the streets of Edo using groups of rōnin. Satsuma and Chōshū forces then moved on Kyoto in force, pressuring the Imperial Court for a conclusive edict dissolving the shogunate. Following a conference of daimyo, the Imperial Court issued such an edict, removing the power of the shogunate in the dying days of 1867. The Satsuma, Chōshū, and other han leaders and radical courtiers, however, rebelled, seized the imperial palace, and announced their own restoration on January 3, 1868. Keiki nominally accepted the plan, retiring from the Imperial Court to Osaka at the same time as resigning as shogun. Fearing that this concession of shogunal power was feigned and intended to consolidate his position, the anti-shogunal domains kept up the dispute, which culminated in a military confrontation between Tokugawa and allied domains and the forces of Satsuma, Tosa and Chōshū at Fushimi and Toba. With the battle turning toward the anti-shogunal forces, Keiki then quit Osaka for Edo, essentially ending both the power of the Tokugawa and the shogunate that had ruled Japan for over 250 years.
Following the Boshin war (1868–1869), the bakufu was abolished, and Keiki was reduced to the ranks of the common daimyo. Resistance continued in the North throughout 1868, and the bakufu naval forces under Admiral Enomoto Takeaki continued to hold out for another six months in Hokkaidō, where they founded the short-lived Republic of Ezo. This defiance ended in May 1869 at the Battle of Hakodate, after one month of fighting.
See also
Prominent figures
- Ōmura Masujirō
- Sakamoto Ryoma
- Kondo Isami
- Hijikata Toshizo
- Takasugi Shinsaku
- Matsudaira Katamori
- Saigo Takamori
- Tokugawa Yoshinobu
- Yoshida Shoin
- Katsura Kogoro
- Nomura Motoni
- Matthew C. Perry
Lesser-known figures of the time:
- Hayashi Daigaku no kami (Lord Rector, Confucianist)
- Ido Tsushima no kami (Governor of Edo, former Governor of Nagasaki)
- Izawa Mimasaka no kami (Governor of Uraga, former Governor of Nagasaki)
- Kawakami Gensai (greatest of the four hitokiri, active in assassinations during this period)
- Takano Chōei – Rangaku scholar
- Ernest Satow in Japan 1862–69
- Edward and Henry Schnell
- Robert Bruce Van Valkenburgh, American Minister-Resident
International relations
- Hillsborough, page # needed.
- Ravina, page # needed.
- Jansen 2002, p.287
- Kornicki, p.246
- Cullen, pp. 158–159.
- Jansen 1995, p. 124.
- Jansen 1995, pp. 126–130.
- Takekoshi, pp. 285–86
- Millis, p.88
- Walworth, p.21
- Hammer, p.65.
- Satow, p.33
- Satow, p.31
- Satow, p.34
- Jansen 1995, p.175.
- Dower p.2-1
- Metzler p.15
- Totman, pp. 140–147
- Satow, p.157.
- Jansen 1995, p.188
- Cullen, Louis M. (2003). A history of Japan 1582–1941: internal and external worlds. Cambridge University Press. ISBN 0-521-52918-2.
- Denney, John. (2011). Respect and Consideration: Britain in Japan 1853 - 1868 and beyond. Radiance Press. ISBN 978-0-9568798-0-6
- Dower, John W. (2008). Yokohama Boomtown: Foreigners in Treaty-Port Japan (1859–1872). Chapter Two, "Chaos". MIT. Visualizing Cultures.
- Hammer, Joshua. (2006).Yokohama Burning: the Deadly 1923 Earthquake and Fire that Helped Forge the Path to World War II. Simon and Schuster. ISBN 978-0-743-26465-5.
- Hillsborough, Romulus. (2005). Shinsengumi: The Shōgun's Last Samurai Corps. North Clarendon, Vermont: Tuttle Publishing. ISBN 0-8048-3627-2.
- Iida, Ken'ichi. (1980). "Origin and development of iron and steel technology in Japan". IDE-JETRO, UN University. Retrieved 16 April 2013.
- Jansen, Marius B. (1995). The Emergence of Meiji Japan. Cambridge University Press. ISBN 0-521-48405-7.
- Jansen, Marius B. (2002). The making of modern Japan. Harvard University Press. ISBN 0-674-00991-6.
- Kornicki, Peter F. (1998). Meiji Japan: Political, Economic and Social History 1868–1912. Taylor and Francis. ISBN 978-0-415-15618-9.
- Metzler, Mark. (2006). Lever of empire: the international gold standard and the crisis of liberalism in prewar Japan. University of California Press. ISBN 0-520-24420-6.
- Millis, Walter. (1981). [1st publ. 1956]. Arms and men: a study in American military history. Rutgers University Press. ISBN 978-0-8135-0931-0.
- Ravina, Mark. (2004). Last Samurai: The Life and Battles of Saigo Takamori. Hoboken, N.J.: John Wiley & Sons. ISBN 0-471-08970-2.
- Satow, Ernest. (2006). [1st publ. 1921]. A Diplomat in Japan. Stone Bridge Classics. ISBN 978-1-933330-16-7
- Takekoshi, Yosaburō. (2005). [1st publ. 1930]. The economic aspects of the history of the civilization of Japan. Vol. 3. Taylor & Francis. ISBN 978-0-415-32381-9.
- Walworth, Arthur. (2008). [1st publ. 1946]. Black Ships Off Japan – The Story of Commodore Perry's Expedition. Lightning Source Incorporated. ISBN 978-1-443-72850-8.
- Languages and the Diplomatic Contacts in the Late Tokugawa Shogunate
- http://www.webkohbo.com/info3/bakumatu_menu/bakutop.html (Japanese)
Occupation of the Ruhr
The Occupation of the Ruhr between 1923 and 1925 by troops from France and Belgium was a response to the failure of the German Weimar Republic under Chancellor Wilhelm Cuno to pay reparations in the aftermath of World War I.
The Ruhr region had previously been occupied by Allied troops in the immediate aftermath of the First World War, during the Occupation of the Rhineland (1918–1919). Under the terms of the Treaty of Versailles (1919), which formally ended the war, Germany admitted responsibility for starting the war and was obligated to pay war reparations to the various Allies, principally France. The total sum of reparations demanded from Germany—around 226 billion gold marks (US $824 billion in 2013)—was decided by an Inter-Allied Reparations Commission. In 1921, the amount was reduced to 132 billion gold marks, at that time equivalent to $31.4 billion (US $442 billion in 2013) or £6.6 billion (UK £284 billion in 2013). Even with the reduction, the debt was huge. As some of the payments were in industrial raw materials, German factories were unable to function, and the German economy suffered, further damaging the country's ability to pay.
By late 1922, the German defaults on payments had grown so regular that a crisis engulfed the Reparations Commission; the French and Belgian delegates urged occupying the Ruhr as a way of forcing Germany to pay more, while the British delegate urged a lowering of the payments. As a consequence of a German default on timber deliveries in December 1922, the Reparations Commission declared Germany in default, which led to the Franco-Belgian occupation of the Ruhr in January 1923. Particularly galling to the French was that the timber quota on which the Germans defaulted was based on an assessment of their capacity that the Germans themselves had made and subsequently lowered. The Allies believed that the government of Chancellor Wilhelm Cuno had defaulted on the timber deliveries deliberately as a way of testing the will of the Allies to enforce the treaty. The entire conflict was further exacerbated by a German default on coal deliveries in early January 1923, which was the thirty-fourth coal default in the previous thirty-six months. The French Premier Raymond Poincaré was deeply reluctant to order the Ruhr occupation and took this step only after the British had rejected his proposals for non-military sanctions against Germany. Frustrated at Germany not paying reparations, Poincaré had hoped for joint Anglo-French economic sanctions against Germany in 1922 and opposed military action. However, by December 1922 he was faced with Anglo-American-German opposition and saw coal for French steel production and payments in money, as laid out in the Treaty of Versailles, draining away. Exasperated with British opposition, Poincaré wrote to the French ambassador in London:
"Judging others by themselves, the English, who are blinded by their loyalty, have always thought that the Germans did not abide by their pledges inscribed in the Versailles Treaty because they had not frankly agreed to them. ... We, on the contrary, believe that if Germany, far from making the slightest effort to carry out the treaty of peace, has always tried to escape her obligations, it is because until now she has not been convinced of her defeat. ... We are also certain that Germany, as a nation, resigns herself to keep her pledged word only under the impact of necessity".
Poincaré decided to occupy the Ruhr on 11 January 1923 to extract the reparations himself. The real issue during the Ruhrkampf (Ruhr struggle), as the Germans labelled the battle against the French occupation, was not the German defaults on coal and timber deliveries but the sanctity of the Versailles treaty. Poincaré often argued to the British that letting the Germans defy Versailles with regard to reparations would create a precedent that would lead to the Germans dismantling the rest of the Versailles treaty. Finally, Poincaré argued that once the chains that had bound Germany in Versailles were destroyed, it was inevitable that Germany would plunge the world into another world war.
Initiated by French Prime Minister Raymond Poincaré, the invasion took place on 11 January 1923. Some theories state that the French aimed to occupy the centre of German coal, iron, and steel production in the Ruhr valley simply to get the money. Others state that France did it to ensure that the reparations were paid in goods, since the Mark was practically worthless because of the hyperinflation that already existed at the end of 1922. France had the iron ore and Germany had the coal. Each state wanted free access to the resource it was short of, as together these resources had far more value than separately. (Eventually this problem was resolved in the European Coal and Steel Community.)
Passive resistance
The occupation was initially greeted by a campaign of passive resistance. Approximately 130 German civilians were killed by the French occupation army during the events. Some theories assert that, to pay for "passive resistance" in the Ruhr, the German government set off the hyperinflation that destroyed the German economy in 1923. Others state that the road to hyperinflation had been well established earlier, with the reparation payments that began in November 1921 (see 1920s German inflation). In the face of economic collapse, with huge unemployment and hyperinflation, the strikes were eventually called off in September 1923 by the new Gustav Stresemann coalition government, which was followed by a state of emergency. Despite this, civil unrest grew into riots and coup attempts targeted at the government of the Weimar Republic, including the Beer Hall Putsch. The Rhenish Republic was proclaimed at Aachen (Aix-la-Chapelle) in October 1923.
Though the French did succeed in making their occupation of the Ruhr pay, the Germans, through their "passive resistance" in the Ruhr and the hyperinflation that wrecked their economy, won the world's sympathy, and under heavy Anglo-American financial pressure (the simultaneous decline in the value of the franc made the French very open to pressure from Wall Street and the City), the French were forced to agree to the Dawes Plan of April 1924, which substantially lowered German reparations payments. Under the Dawes Plan, Germany paid only 1 billion marks in 1924, and then increasing amounts over the next three years, until the total rose to 2.25 billion marks by 1927.
Sympathy for Germany
Internationally the occupation did much to boost sympathy for Germany, although no action was taken in the League of Nations since the occupation was legal under the Treaty of Versailles. The French, with their own economic problems, eventually accepted the Dawes Plan and withdrew from the occupied areas in July and August 1925. The last French troops evacuated Düsseldorf and Duisburg, along with the latter city's important harbour of Duisburg-Ruhrort, ending the French occupation of the Ruhr region on 25 August 1925. The occupation of the Ruhr "was profitable and caused neither the German hyperinflation, which began in 1922 and ballooned because of German responses to the Ruhr occupation, nor the franc's 1924 collapse, which arose from French financial practices and the evaporation of reparations". The profits, after Ruhr-Rhineland occupation costs, were nearly 900 million gold marks.
Hall argues that Poincaré was not a vindictive nationalist. Despite his disagreements with Britain, he desired to preserve the Anglo-French entente. When he ordered the French occupation of the Ruhr valley in 1923, his aims were moderate. He did not try to revive Rhenish separatism. His major goal was the winning of German compliance with the Versailles treaty. Though Poincaré's aims were moderate, his inflexible methods and authoritarian personality led to the failure of his diplomacy.
British perspective
When, on 12 July 1922, Germany demanded a moratorium on reparation payments, tension developed between the French government of Raymond Poincaré and the Coalition government of David Lloyd George. The British Labour Party demanded peace and denounced Lloyd George as a troublemaker. It saw Germany as the martyr of the postwar period and France as vengeful and the principal threat to peace in Europe. The tension between France and Britain peaked during a conference in Paris in early 1923, by which time the coalition led by Lloyd George had been replaced by the Conservatives. The Labour Party opposed the occupation of the Ruhr throughout 1923, which it rejected as French imperialism. The British Labour Party believed it had won when Poincaré accepted the Dawes Plan in 1924.
Dawes Plan
To deal with the implementation of the Dawes Plan, a conference took place in London in July–August 1924. The British Labour Prime Minister Ramsay MacDonald, who viewed reparations as impossible to pay, successfully pressured the French Premier Édouard Herriot into a whole series of concessions to Germany. The British diplomat Sir Eric Phipps commented that “The London Conference was for the French 'man in the street' one long Calvary as he saw M. Herriot abandoning one by one the cherished possessions of French preponderance on the Reparations Commission, the right of sanctions in the event of German default, the economic occupation of the Ruhr, the French-Belgian railroad Régie, and finally, the military occupation of the Ruhr within a year”. The Dawes Plan was significant in European history as it marked the first time that Germany had succeeded in defying Versailles and revising an aspect of the treaty in its favour.
The Saar region remained under French control until 1935.
German politics
In German politics, the crisis accelerated the formation of right-wing parties. Disoriented by the defeat in the war, conservatives in 1922 founded a consortium of nationalist associations, the "Vereinigten Vaterländischen Verbände Deutschlands" (VVVD). The goal was to forge a united front of the right. In the climate of national resistance against the French Ruhr invasion, the VVVD reached its peak strength. It advocated policies of uncompromising monarchism, corporatism, anti-Semitism, and opposition to the Versailles settlement. However, it lacked internal unity and money, so it never managed to unite the right, and it faded away by the late 1920s as the Nazis emerged.
See also
- Timothy W. Guinnane (January 2004). "Vergangenheitsbewältigung: the 1953 London Debt Agreement" (PDF). Center Discussion Paper no. 880. Economic Growth Center, Yale University. Retrieved 6 December 2008.
- The extent to which payment defaults were genuine or artificial is controversial. See World War I reparations and the sources cited therein.
- Marks, Sally "The Myths of Reparations" pages 231–255 from Central European History, Volume 11, Issue # 3, September 1978 pages 239–240.
- Marks, Sally "The Myths of Reparations" pages 231–255 from Central European History, Volume 11, Issue # 3, September 1978 pages 240–241.
- Marks, Sally "The Myths of Reparations" pages 231–255 from Central European History, Volume 11, Issue # 3, September 1978 page 240.
- Marks, Sally "The Myths of Reparations" pages 231–255 from Central European History, Volume 11, Issue # 3, September 1978 page 241.
- Marks, Sally "The Myths of Reparations" pages 231–255 from Central European History, Volume 11, Issue # 3, September 1978 page 244.
- Leopold Schwarzschild, World in Trance (London: Hamish Hamilton, 1943), p. 140.
- Marks, Sally "The Myths of Reparations" pages 231–255 from Central European History, Volume 11, Issue # 3, September 1978 page 245.
- Marks, Sally "The Myths of Reparations" pages 231–255 from Central European History, Volume 11, Issue # 3, September 1978 pages 244–245.
- Fischer, p. 28
- Fischer, p. 42
- Fischer, p. 51
- Ferguson, Adam; When Money Dies: The Nightmare of Deficit Spending, Devaluation and Hyperinflation in Weimar Germany p. 38. ISBN 1-58648-994-1
- Marks, Sally "The Myths of Reparations" pages 231–255 from Central European History, Volume 11, Issue # 3, September 1978 pages 245–246.
- Marks, Sally "The Myths of Reparations" pages 231–255 from Central European History, Volume 11, Issue # 3, September 1978 pages 247.
- Walsh, p. 142
- Sally Marks, '1918 and After. The Postwar Era', in Gordon Martel (ed.), The Origins of the Second World War Reconsidered. Second Edition (London: Routledge, 1999), p. 26.
- Marks, p. 35, n. 57.
- Hines H. Hall, III, "Poincare and Interwar Foreign Policy: 'L'Oublie de la Diplomatie' in Anglo-French Relations, 1922-1924," Proceedings of the Western Society for French History (1982), Vol. 10, pp 485-494.
- Aude Dupré de Boulois, "Les Travaillistes, la France et la Question Allemande (1922-1924)," Revue d'Histoire Diplomatique (1999) 113#1 pp 75-100.
- Marks, "The Myths of Reparations" page 248.
- Marks, "The Myths of Reparations" page 249.
- James M. Diehl, "Von Der 'Vaterlandspartei' zur 'Nationalen Revolution': Die 'Vereinigten Vaterländischen Verbände Deutschlands (VVVD)' 1922-1932," [From "party for the fatherland" to "national revolution": the United Fatherland Associations of Germany (VVVD), 1922-32] Vierteljahrshefte für Zeitgeschichte (1985) 333#4 pp 617-639.
- Fischer, Conan. The Ruhr Crisis, 1923–1924 (Oxford U.P., 2003); online review
- Marks, Sally. "The Myths of Reparations" pages Central European History, Volume 11, Issue # 3, September 1978 231–255
- O'Riordan, Elspeth. "British Policy and the Ruhr Crisis 1922-24," Diplomacy & Statecraft (2004) 15#2 pp 221-251.
- O'Riordan, Elspeth. Britain and the Ruhr crisis (London, 2001);
- Walsh, Ben. GCSE modern world history;
French and German
- Stanislas Jeannesson, Poincaré, la France et la Ruhr 1922–1924. Histoire d'une occupation (Strasbourg, 1998);
- Michael Ruck, Die Freien Gewerkschaften im Ruhrkampf 1923 (Frankfurt am Main, 1986);
- Barbara Müller, Passiver Widerstand im Ruhrkampf. Eine Fallstudie zur gewaltlosen zwischenstaatlichen Konfliktaustragung und ihren Erfolgsbedingungen (Münster, 1995);
- Gerd Krüger, Das "Unternehmen Wesel" im Ruhrkampf von 1923. Rekonstruktion eines misslungenen Anschlags auf den Frieden, in Horst Schroeder, Gerd Krüger, Realschule und Ruhrkampf. Beiträge zur Stadtgeschichte des 19. und 20. Jahrhunderts (Wesel, 2002), pp. 90–150 (Studien und Quellen zur Geschichte von Wesel, 24) [esp. on the background of so-called 'active' resistance];
- Gerd Krumeich, Joachim Schröder (eds.), Der Schatten des Weltkriegs: Die Ruhrbesetzung 1923 (Essen, 2004) (Düsseldorfer Schriften zur Neueren Landesgeschichte und zur Geschichte Nordrhein-Westfalens, 69);
- Gerd Krüger, "Aktiver" und passiver Widerstand im Ruhrkampf 1923, in Günther Kronenbitter, Markus Pöhlmann, Dierk Walter (eds.), Besatzung. Funktion und Gestalt militärischer Fremdherrschaft von der Antike bis zum 20. Jahrhundert (Paderborn / Munich / Vienna / Zurich, 2006), pp. 119–30 (Krieg in der Geschichte, 28); | http://en.wikipedia.org/wiki/Occupation_of_the_Ruhr | 13 |
The Han Dynasty (206 BCE – 220 CE), founded by the peasant rebel leader Liu Bang (known posthumously as Emperor Gaozu), was the second imperial dynasty of China. It followed the Qin Dynasty (221–206 BCE), which had unified the Warring States of China by conquest. Interrupted briefly by the Xin Dynasty (9–23 CE) of Wang Mang, the Han Dynasty is divided into two periods: the Western Han (206 BCE – 9 CE) and the Eastern Han (25–220 CE). These appellations are derived from the locations of the capital cities Chang'an and Luoyang, respectively. The third and final capital of the dynasty was Xuchang, where the court moved in 196 CE during a period of political turmoil and civil war.
The Han Dynasty ruled in an era of Chinese cultural consolidation, political experimentation, relative economic prosperity and maturity, and great technological advances. There was unprecedented territorial expansion and exploration initiated by struggles with non-Chinese peoples, especially the nomadic Xiongnu of the Eurasian Steppe. The Han emperors were initially forced to acknowledge the rival Xiongnu Chanyus as their equals, yet in reality the Han was an inferior partner in a tributary and royal marriage alliance known as heqin. This agreement was broken when Emperor Wu of Han (r. 141–87 BCE) launched a series of military campaigns which eventually caused the fissure of the Xiongnu Federation and redefined the borders of China. The Han realm was expanded into the Hexi Corridor of modern Gansu province, the Tarim Basin of modern Xinjiang, modern Yunnan and Hainan, modern northern Vietnam, modern North Korea, and southern Outer Mongolia. The Han court established trade and tributary relations with rulers as far west as the Arsacids, to whose court at Ctesiphon in Mesopotamia the Han monarchs sent envoys. Buddhism first entered China during the Han, spread by missionaries from Parthia and the Kushan Empire of northern India and Central Asia.
From its beginning, the Han imperial court was threatened by plots of treason and revolt from its subordinate kingdoms, the latter eventually ruled only by royal Liu family members. Initially, the eastern half of the empire was indirectly administered through large semi-autonomous kingdoms which pledged loyalty and a portion of their tax revenues to the Han emperors, who ruled directly over the western half of the empire from Chang'an. Gradual measures were introduced by the imperial court to reduce the size and power of these kingdoms, until a reform of the middle 2nd century BCE abolished their semi-autonomous rule and staffed the kings' courts with central government officials. Yet much more volatile and consequential for the dynasty was the growing power of both consort clans (of the empress) and the eunuchs of the palace. In 92 CE, the eunuchs entrenched themselves for the first time in the issue of the emperors' succession, causing a series of political crises which culminated in 189 CE with their downfall and slaughter in the palaces of Luoyang. This event triggered an age of civil war as the country became divided by regional warlords vying for power. Finally, in 220 CE, the son of an imperial chancellor and king accepted the abdication of the last Han emperor, who was deemed to have lost the Mandate of Heaven according to Dong Zhongshu's (179–104 BCE) cosmological system that intertwined the fate of the imperial government with Heaven and the natural world. Following the Han, China was split into three states: Cao Wei, Shu Han, and Eastern Wu; these were reconsolidated into one empire by the Jin Dynasty (265–420 CE).
Fall of Qin and Chu-Han contention
Collapse of Qin
The Zhou Dynasty (c. 1050–256 BCE) had established the State of Qin in Western China as an outpost to breed horses and act as a defensive buffer against nomadic armies of the Rong, Qiang, and Di peoples. After conquering six Warring States (i.e. Han, Zhao, Wei, Chu, Yan, and Qi) by 221 BCE, the King of Qin, Ying Zheng, unified China under one empire divided into 36 centrally-controlled commanderies. With control over much of China proper, he affirmed his enhanced prestige by taking the unprecedented title huangdi (皇帝), or 'emperor', known thereafter as Qin Shi Huang (i.e. the first emperor of Qin). Han-era historians would accuse his regime of employing ruthless methods to preserve his rule.
Qin Shi Huang died of natural causes in 210 BCE. In 209 BCE the conscription officers Chen Sheng and Wu Guang, leading 900 conscripts through the rain, failed to meet an arrival deadline; the Standard Histories claim that the Qin punishment for this delay would have been execution. To avoid this, Chen and Wu started a rebellion against Qin, known as the Daze Village Uprising, but they were thwarted by the Qin general Zhang Han in 208 BCE; both Wu and Chen were subsequently assassinated by their own soldiers. Yet by this point others had rebelled, among them Xiang Yu (d. 202 BCE) and his uncle Xiang Liang (項梁/项梁), men from a leading family of the Chu aristocracy. They were joined by Liu Bang, a man of peasant origin and supervisor of convicts in Pei County. Mi Xin, grandson of King Huai I of Chu, was declared King Huai II of Chu at his powerbase of Pengcheng (modern Xuzhou) with the support of the Xiangs, while other kingdoms soon formed in opposition to Qin. Despite this, in 208 BCE Xiang Liang was killed in a battle with Zhang Han, who subsequently attacked Zhao Xie the King of Zhao at his capital of Handan, forcing him to flee to Julu, which Zhang put under siege. However, the new kingdoms of Chu, Yan, and Qi came to Zhao's aid; Xiang Yu defeated Zhang at Julu and in 207 BCE forced Zhang to surrender.
While Xiang was occupied at Julu, King Huai II sent Liu Bang to capture the Qin heartland of Guanzhong with an agreement that the first officer to capture this region would become its king. In late 207 BCE, the Qin ruler Ziying, who had claimed the reduced title of King of Qin, had his chief eunuch Zhao Gao killed after Zhao had orchestrated the deaths of Chancellor Li Si in 208 BCE and the second Qin emperor Qin Er Shi in 207 BCE. Liu Bang gained Ziying's submission and secured the Qin capital of Xianyang; persuaded by his chief advisor Zhang Liang (d. 189 BCE) not to let his soldiers loot the city, he instead sealed up its treasury.
Contention with Chu
The Standard Histories allege that when Xiang Yu arrived at Xianyang two months later in early 206 BCE, he looted it, burned it to the ground, and had Ziying executed. In that year, Xiang Yu offered King Huai II the title of Emperor Yi of Chu and sent him to a remote frontier where he was assassinated; Xiang Yu then assumed the title Hegemon-King of Western Chu (西楚霸王) and became the leader of a confederacy of 18 kingdoms. At the Feast at Hong Gate, Xiang Yu considered having Liu Bang assassinated, but Liu, realizing that Xiang was considering killing him, escaped during the middle of the feast. In a slight towards Liu Bang, Xiang Yu carved Guanzhong into three kingdoms with former Qin general Zhang Han and two of his subordinates as kings; Liu Bang was granted the frontier Kingdom of Han in Hanzhong, where he would pose less of a political challenge to Xiang Yu.
In the summer of 206 BCE, Liu Bang heard of Emperor Yi's fate and decided to rally some of the new kingdoms to oppose Xiang Yu, leading to a four-year war known as the Chu–Han contention. Liu initially made a direct assault against Pengcheng and captured it while Xiang was battling another king who resisted him—Tian Guang (田廣) the King of Qi—but his forces collapsed upon Xiang's return to Pengcheng; he was saved by a storm which delayed the arrival of Chu's troops, although his father Liu Zhijia (劉執嘉) and wife Lü Zhi were captured by Chu forces. Liu barely escaped another defeat at Xingyang, but Xiang Yu was unable to pursue him because Liu Bang induced Ying Bu (英布), the King of Huainan, to rebel against Xiang. After Liu Bang occupied Chenggao along with a large Qin grain storage, Xiang threatened to kill Liu's father if he did not surrender, but Liu did not give in to Xiang's threats.
With Chenggao and his food supplies lost, and with Liu Bang's general Han Xin (d. 196 BCE) having conquered Zhao and Qi to Chu's north, in 203 BCE Xiang Yu offered to release Liu Bang's relatives from captivity and split China into political halves: the west would belong to Han and the east to Chu. Although Liu accepted the truce, it was short-lived, and in 202 BCE at Gaixia in modern Anhui, the Han forces compelled Xiang Yu to flee from his fortified camp in the early morning with only 800 cavalry, pursued by 5,000 Han cavalry. After several bouts of fighting, Xiang Yu became surrounded at the banks of the Yangzi River, where he committed suicide. Liu Bang took the title of emperor, and is known to posterity as Emperor Gaozu of Han (r. 202–195 BCE).
Reign of Gaozu
Consolidation, precedents, and rivals
Emperor Gaozu initially made Luoyang his capital, but then moved it to Chang'an (near modern Xi'an, Shaanxi) due to concerns over natural defences and better access to supply routes. Following Qin precedent, Emperor Gaozu adopted the administrative model of a tripartite cabinet (formed by the Three Excellencies) along with nine subordinate ministries (headed by the Nine Ministers). Despite Han statesmen's general condemnation of Qin's harsh methods and Legalist philosophy, the first Han law code compiled by Chancellor Xiao He in 200 BCE seems to have borrowed much from the structure and substance of the Qin code (excavated texts from Shuihudi and Zhangjiashan in modern times have reinforced this suspicion).
From Chang'an, Gaozu ruled directly over 13 commanderies (increased to 16 by his death) in the western portion of the empire. In the eastern portion, he established 10 semi-autonomous kingdoms (Yan, Dai, Zhao, Qi, Liang, Chu, Huai, Wu, Nan, and Changsha) that he bestowed to his most prominent followers to placate them. Due to alleged acts of rebellion and even alliances with the Xiongnu—a northern nomadic people—by 196 BCE Gaozu had replaced nine of them with members of the royal family.
According to Michael Loewe, the administration of each kingdom was "a small-scale replica of the central government, with its chancellor, royal counsellor, and other functionaries." The kingdoms were to transmit census information and a portion of their taxes to the central government. Although they were responsible for maintaining an armed force, kings were not authorized to mobilize troops without explicit permission from the capital.
Wu Rui (吳芮), King of Changsha, was the only remaining king not of the Liu clan. When Wu Rui's great-grandson Wu Zhu (吳著) or Wu Chan (吳產) died heirless in 157 BCE, Changsha was transformed into an imperial commandery and later a Liu family principality. South of Changsha, Gaozu sent Lu Jia (陸賈) as ambassador to the court of Zhao Tuo to acknowledge the latter's sovereignty over Nanyue (Vietnamese: Triệu Dynasty; in modern Southwest China and northern Vietnam).
Xiongnu and Heqin
The Qin general Meng Tian had forced Toumen, the Chanyu of the Xiongnu, out of the Ordos Desert in 215 BCE, but Toumen's son and successor Modu Chanyu built the Xiongnu into a powerful empire by subjugating many other tribes. By the time of Modu's death in 174 BCE, the Xiongnu domains stretched from what is now Manchuria and Mongolia to the Altai and Tian Shan mountain ranges in Central Asia. The Chinese feared incursions by the Xiongnu under the guise of trade and were concerned that Han-manufactured iron weapons would fall into Xiongnu hands. Gaozu thus enacted a trade embargo against the Xiongnu. To compensate the Chinese border merchants of the northern kingdoms of Dai and Yan for lost trade, he made them government officials with handsome salaries. Outraged by this embargo, Modu Chanyu planned to attack Han. When the Xiongnu invaded Taiyuan in 200 BCE and were aided by the defector King Xin of Hán (韓/韩, not to be confused with the ruling Hàn 漢 dynasty, or the general Han Xin), Gaozu personally led his forces through the snow to Pingcheng (near modern Datong, Shanxi). In the ensuing Battle of Baideng, Gaozu's forces were heavily surrounded for seven days; running short of supplies, he was forced to flee.
After this defeat, the court adviser Liu Jing (劉敬, originally named Lou Jing [婁敬]) convinced the emperor to create a peace treaty and marriage alliance with the Xiongnu Chanyu called the heqin agreement. By this arrangement, established in 198 BCE, the Han hoped to modify the Xiongnu's nomadic values with Han luxury goods given as tribute (silks, wine, foodstuffs, etc.) and to make Modu's half-Chinese successor a subordinate to his grandfather Gaozu. The exact amounts of annual tribute promised by Emperor Gaozu to the Xiongnu in the 2nd century BCE, shortly after the defeat, are unknown. In 89 BCE, however, Hulugu Chanyu (狐鹿姑) (r. 95–85 BCE) requested a renewal of the heqin agreement with an increased annual tribute of 400,000 L (11,350 U.S. bu) of wine, 100,000 L (2,840 U.S. bu) of grain, and 10,000 bales of silk; previous amounts would thus have been less than these figures.
Although the treaty acknowledged both huangdi and chanyu as equals, Han was in fact the inferior partner, since it was forced to pay tribute to appease the militarily powerful Xiongnu. Emperor Gaozu was initially set to give his only daughter to Modu, but in the face of opposition from Empress Lü, he instead made a female relative a princess and married her to Modu. Until the 130s BCE, the offering of princess brides and tributary items scarcely satisfied the Xiongnu, who often raided Han's northern frontiers and violated the 162 BCE treaty that established the Great Wall as the border between Han and Xiongnu.
Empress Dowager Lü's rule
Emperor Hui
When Ying Bu rebelled in 195 BCE, Emperor Gaozu personally led the troops against Ying and received an arrow wound which allegedly led to his death the following year. His heir apparent Liu Ying took the throne and is posthumously known as Emperor Hui of Han (r. 195–188 BCE). Shortly afterwards Gaozu's widow Lü Zhi, now empress dowager, had Liu Ruyi, a potential claimant to the throne, poisoned and his mother, the Consort Qi, brutally mutilated. When the teenage Emperor Hui discovered the cruel acts committed by his mother, Loewe says that he "did not dare disobey her."
Hui's brief reign saw the completion of the defensive city walls around the capital Chang'an in 190 BCE; these brick and rammed earth walls were originally 12 m (40 ft) tall and formed a rough rectangular ground plan (with some irregularities due to topography); their ruins still stand today. This urban construction project was completed by 150,000 conscript laborers. Emperor Hui's reign saw the repeal of old Qin laws banning certain types of literature and was characterized by a cautious approach to foreign policy, including the renewal of the heqin agreement with the Xiongnu and Han's acknowledgment of the independent sovereignty of the Kings of Donghai and Nanyue.
Regency and downfall of the Lü clan
Since Emperor Hui did not sire any children with his empress Zhang Yan, after his death in 188 BCE, Lü Zhi, now grand empress dowager and regent, chose his successor from among his sons with other consorts. She first placed Emperor Qianshao of Han (r. 188–184 BCE) on the throne, but then removed him for another puppet ruler Emperor Houshao of Han (r. 184–180 BCE). She not only issued imperial edicts during their reigns, but she also appointed members of her own clan as kings against Emperor Gaozu's explicit prohibition; other clan members became key military officers and civil officials.
The court under Lü Zhi was not only unable to deal with a Xiongnu invasion of Longxi Commandery (in modern Gansu) in which 2,000 Han prisoners were taken, but it also provoked a conflict with Zhao Tuo, King of Nanyue, by imposing a ban on exporting iron and other trade items to his southern kingdom. Proclaiming himself Emperor Wu of Nanyue (南越武帝) in 183 BCE, Zhao Tuo attacked the Han Kingdom of Changsha in 181 BCE. He did not rescind his rival imperial title until the Han ambassador Lu Jia again visited Nanyue's court during the reign of Emperor Wen.
After Empress Dowager Lü's death in 180 BCE, it was alleged that the Lü clan plotted to overthrow the Liu dynasty, and Liu Xiang the King of Qi (Emperor Gaozu's grandson) rose against the Lüs. Before the central government and Qi forces engaged each other, the Lü clan was ousted from power and destroyed by a coup led by the officials Chen Ping and Zhou Bo (周勃) at Chang'an. Although Liu Xiang had resisted the Lüs, he was passed over to become emperor because he had mobilized troops without permission from the central government and because his mother's family possessed the same ambitious attitude as the Lüs. Consort Bo, the mother of Liu Heng, King of Dai, was considered to possess a noble character, so her son was chosen as successor to the throne; he is known posthumously as Emperor Wen of Han (r. 180–157 BCE).
Reign of Wen and Jing
Reforms and policies
During the "Rule of Wen and Jing" (the era named after Emperor Wen and his successor Emperor Jing (r. 157–141 BCE), the Han Empire witnessed greater economic and dynastic stability, while the central government assumed more power over the realm. In an attempt to distance itself from the harsh rule of Qin, the court under these rulers abolished legal punishments involving mutilation in 167 BCE, declared eight widespread amnesties between 180–141 BCE, and reduced the tax rate on households' agricultural produce from one-fifteenth to one-thirtieth in 168 BCE. It was abolished altogether the following year, but reinstated at the rate of one-thirtieth in 156 BCE.
Government policies were influenced by the proto-Daoist Huang-Lao (黃老) ideology, a mix of political and cosmological precepts given patronage by Wen's wife Empress Dou (d. 135 BCE), who was empress dowager during Jing's reign and grand empress dowager during the early reign of his successor Emperor Wu (r. 141–87 BCE). Huang-Lao, named after the mythical Yellow Emperor and the 6th-century-BCE philosopher Laozi, viewed the former as the founder of ordered civilization; this was unlike the Confucians, who gave that role to legendary sage kings Yao and Shun. Han imperial patrons of Huang-Lao sponsored the policy of "nonaction" or wuwei (無為) (a central concept of Laozi's Daodejing), which claimed that rulers should interfere as little as possible if administrative and legal systems were to function smoothly. The influence of Huang-Lao doctrines on state affairs became eclipsed with the formal adoption of Confucianism as state ideology during Wu's reign and the later view that Laozi, not the Yellow Emperor, was the originator of Daoist practices.
From 179–143 BCE, the number of kingdoms was increased from eleven to twenty-five and the number of commanderies from nineteen to forty. This was not due to a large territorial expansion, but because kingdoms that had rebelled against Han rule or failed to produce an heir were significantly reduced in size or even abolished and carved into new commanderies or smaller kingdoms.
Rebellion of Seven States
When Liu Xian (劉賢), the heir apparent of Wu, once made an official visit to the capital during Wen's reign, he played a board game called liubo with then crown prince Liu Qi, the future Emperor Jing. During a heated dispute, Liu Qi threw the game board at Liu Xian, killing him. This outraged his father Liu Pi (劉濞), the King of Wu and a nephew of Emperor Gaozu's, who was nonetheless obliged to claim allegiance to Liu Qi once he took the throne.
Still bitter over the death of his son and fearful that he would be targeted in a wave of reduction of kingdom sizes that Emperor Jing carried out under the advice of Imperial Counselor Chao Cuo (d. 154 BCE), the King of Wu led a revolt against Han in 154 BCE as the head of a coalition with six other rebelling kingdoms: Chu, Zhao, Jiaoxi, Jiaodong, Zaichuan, and Jinan, which also feared such reductions. However, Han forces commanded by Zhou Yafu were ready and able to put down the revolt, destroying the coalition of seven states against Han. Several kingdoms were abolished (although later reinstated) and others significantly reduced in size. Emperor Jing issued an edict in 145 BCE which outlawed the independent administrative staffs in the kingdoms and abolished all their senior offices except for the chancellor, who was henceforth reduced in status and appointed directly by the central government. His successor Emperor Wu would diminish their power even further by abolishing the kingdoms' tradition of primogeniture and ordering that each king had to divide up his realm between all of his male heirs.
Relations with the Xiongnu
In 177 BCE, the Xiongnu Wise King of the Right raided the non-Chinese tribes living under Han protection in the northwest (modern Gansu). In 176 BCE, Modu Chanyu sent a letter to Emperor Wen informing him that the Wise King, allegedly insulted by Han officials, acted without the Chanyu's permission and so he punished the Wise King by forcing him to conduct a military campaign against the nomadic Yuezhi. Yet this event was merely part of a larger effort to recruit nomadic tribes north of Han China, during which the bulk of the Yuezhi were expelled from the Hexi Corridor (fleeing west into Central Asia) and the sedentary state of Loulan in the Lop Nur salt marsh, the nomadic Wusun of the Tian Shan range, and twenty-six other states east of Samarkand were subjugated to Xiongnu hegemony. Modu Chanyu's implied threat that he would invade China if the heqin agreement was not renewed sparked a debate in Chang'an; although officials such as Chao Cuo and Jia Yi (d. 169 BCE) wanted to reject the heqin policy, Emperor Wen favored renewal of the agreement. Modu Chanyu died before the Han tribute reached him, but his successor Laoshang Chanyu (174–160 BCE) renewed the heqin agreement and negotiated the opening of border markets. Lifting the ban on trade significantly reduced the frequency and size of Xiongnu raids, which had necessitated tens of thousands of Han troops to be stationed at the border. However, Laoshang Chanyu and his successor Junchen Chanyu (車臣) (r. 160–126 BCE) continued to violate Han's territorial sovereignty by making incursions despite the treaty. While Laoshang Chanyu continued the conquest of his father by driving the Yuezhi into the Ili River valley, the Han quietly built up its strength in cavalry forces to later challenge the Xiongnu.
Reign of Wu
Confucianism and government recruitment
Although Emperor Gaozu did not subscribe to the philosophy and system of ethics attributed to Confucius (fl. 6th century BCE), he did enlist the aid of Confucians such as Lu Jia and Shusun Tong (叔孫通); in 196 BCE he established the first Han regulation for recruiting men of merit into government service, which Robert P. Kramers calls the "first major impulse toward the famous examination system." Emperors Wen and Jing appointed Confucian academicians to court, yet not all academicians at their courts specialized in what would later become orthodox Confucian texts. For several years after Liu Che took the throne in 141 BCE (known posthumously as Emperor Wu), the Grand Empress Dowager Dou continued to dominate the court and did not accept any policy which she found unfavorable or which contradicted Huang-Lao ideology. After her death in 135 BCE, a major shift occurred in Chinese political history.
After Emperor Wu called for the submission of memorial essays on how to improve the government, he favored that of the official Dong Zhongshu (179–104 BCE), a philosopher whom Kramers calls the first Confucian "theologian". Dong's synthesis fused together the ethical ideas of Confucius with the cosmological beliefs in yin and yang and the Five Elements or Wuxing by fitting them into the same holistic, universal system which governed heaven, earth, and the world of man. Moreover, it justified the imperial system of government by giving it a place within the greater cosmos. Reflecting the ideas of Dong Zhongshu, Emperor Wu issued an edict in 136 BCE that abolished academic chairs other than those focused on the Confucian Five Classics. In 124 BCE Emperor Wu established the Imperial University, at which the academicians taught 50 students; this was the incipient form of the civil service examination system refined in later dynasties. Although sons and relatives of officials were often privileged with nominations to office, those who did not come from a family of officials were not barred from entry into the bureaucracy. Rather, education in the Five Classics became the paramount prerequisite for gaining office; as a result, the Imperial University was expanded dramatically by the 2nd century CE, when it accommodated 30,000 students. With Cai Lun's (d. 121 CE) invention of the papermaking process in 105 CE, the spread of paper as a cheap writing medium from the Eastern Han period onwards increased the supply of books and hence the number of those who could be educated for civil service.
War against the Xiongnu
The death of Empress Dou also marked a significant shift in foreign policy. In order to address the Xiongnu threat and renewal of the heqin agreement, Emperor Wu called a court conference into session in 135 BCE where two factions of leading ministers debated the merits and faults of the current policy; Emperor Wu followed the majority consensus of his ministers that peace should be maintained. A year later, while the Xiongnu were busy raiding the northern border and waiting for Han's response, Wu had another court conference assembled. The faction supporting war against the Xiongnu was able to sway the majority opinion by making a compromise for those worried about stretching financial resources on an indefinite campaign: in a limited engagement along the border near Mayi, Han forces would lure Junchen Chanyu over with gifts and promises of defections in order to quickly eliminate him and cause political chaos for the Xiongnu. When the Mayi trap failed in 133 BCE (Junchen Chanyu realized he was about to fall into a trap and fled back north), the era of heqin-style appeasement was broken and the Han court resolved to engage in full-scale war.
Leading campaigns involving tens of thousands of troops, in 127 BCE the Han general Wei Qing (d. 106 BCE) recaptured the Ordos Desert region from the Xiongnu and in 121 BCE Huo Qubing (d. 117 BCE) expelled them from the Qilian Mountains, gaining the surrender of many Xiongnu aristocrats. At the Battle of Mobei in 119 BCE, generals Wei and Huo led the campaign to the Khangai Mountains where they forced the chanyu to flee north of the Gobi Desert. The maintenance of 300,000 horses by government slaves in thirty-six different pasture lands was not enough to satisfy the cavalry and baggage trains needed for these campaigns, so the government offered exemption from military and corvée labor for up to three male members of each household who presented a privately bred horse to the government.
Expansion and colonization
After the Xiongnu King Hunye surrendered to Huo Qubing in 121 BCE, the Han acquired a territory stretching from the Hexi Corridor to Lop Nur, thus cutting the Xiongnu off from their Qiang allies. New commanderies were established in the Ordos as well as four in the Hexi Corridor—Jiuquan, Zhangye, Dunhuang, and Wuwei—which were populated with Han settlers after a major Qiang-Xiongnu allied force was repelled from the region in 111 BCE. By 119 BCE, Han forces established their first garrison outposts in the Juyan Lake Basin of Inner Mongolia, with larger settlements built there after 110 BCE. Roughly 40% of the settlers at Juyan came from the Guandong region of modern Henan, western Shandong, southern Shanxi, southern Hebei, northwestern Jiangsu, and northwestern Anhui. After Hunye's surrender, the Han court moved 725,000 people from the Guandong region to populate the Xinqinzhong (新秦中) region south of the bend of the Yellow River. In all, Emperor Wu's forces conquered roughly 4.4 million km2 (1.7 million mi2) of new land, by far the largest territorial expansion in Chinese history. Self-sustaining agricultural garrisons were established in these frontier outposts to support military campaigns as well as secure trade routes leading into Central Asia, the eastern terminus of the Silk Road. The Han-era Great Wall was extended as far west as Dunhuang and sections of it still stand today in Gansu, including thirty Han beacon towers and two fortified castles.
Exploration, foreign trade, war and diplomacy
Starting in 139 BCE, the Han diplomat Zhang Qian traveled west in an unsuccessful attempt to secure an alliance with the Da Yuezhi (who were evicted from Gansu by the Xiongnu in 177 BCE); however, Zhang's travels revealed entire countries which the Chinese were unaware of, the remnants of the conquests of Alexander the Great (r. 336–323 BCE). When Zhang returned to China in 125 BCE, he reported on his visits to Dayuan (Fergana), Kangju (Sogdiana), and Daxia (Bactria, formerly the Greco-Bactrian Kingdom which was subjugated by the Da Yuezhi). Zhang described Dayuan and Daxia as agricultural and urban countries like China, and although he did not venture there, described Shendu (the Indus River valley of Northwestern India) and Anxi (Arsacid territories) further west. Envoys sent to these states returned with foreign delegations and lucrative trade caravans; yet even before this, Zhang noted that these countries were importing Chinese silk. After interrogating merchants, Zhang also discovered a southwestern trade route leading through Burma and on to India. The earliest known Roman glassware found in China (but manufactured in the Roman Empire) is a glass bowl found in a Guangzhou tomb dating to the early 1st century BCE and perhaps came from a maritime route passing through the South China Sea. Likewise, imported Chinese silk attire became popular in the Roman Empire by the time of Julius Caesar (100–44 BCE).
After the heqin agreement broke down, the Xiongnu were forced to extract more crafts and agricultural foodstuffs from the subjugated Tarim Basin urban centers. From 115–60 BCE the Han and Xiongnu battled for control and influence over these states, with the Han gaining, from 108–101 BCE, the tributary submission of Loulan, Turpan, Bügür, Dayuan (Fergana), and Kangju (Sogdiana). The farthest-reaching and most expensive invasion was Li Guangli's four-year campaign against Fergana in the Syr Darya and Amu Darya valleys (modern Uzbekistan and Kyrgyzstan). Historian Laszlo Torday (1997) asserts that Fergana threatened to cut off Han's access to the Silk Road, yet historian Sima Qian (d. 86 BCE) downplayed this threat by asserting that Li's mission was really a means to punish Dayuan for not providing tribute of prized Central Asian stallions.
To the south, Emperor Wu assisted King Zhao Mo in fending off an attack by Minyue (in modern Fujian) in 135 BCE. After a pro-Han faction was overthrown at the court of Nanyue, Han naval forces conquered Nanyue in 111 BCE, bringing areas of modern Guangdong, Guangxi, Hainan Island, and northern Vietnam under Han control. Emperor Wu also launched an invasion into the Dian Kingdom of Yunnan in 109 BCE, subjugating its king as a tributary vassal, while later Dian rebellions in 86 BCE and 83 BCE, 14 CE (during Wang Mang's rule), and 42–45 CE were quelled by Han forces. Wu sent an expedition into what is now North Korea in 128 BCE, but this was abandoned two years later. In 108 BCE, another expedition established four commanderies there, only two of which (i.e. Xuantu Commandery and Lelang Commandery) remained after 82 BCE. Although there was some violent resistance in 108 BCE and irregular raids by Goguryeo and Buyeo afterwards, Chinese settlers conducted peaceful trade relations with native Koreans who lived largely independent of (but were culturally influenced by) the sparse Han settlements.
Economic reforms
To fund his prolonged military campaigns and colonization efforts, Emperor Wu turned away from the "nonaction" policy of earlier reigns by having the central government commandeer the private industries and trades of salt mining and iron manufacturing by 117 BCE. Another government monopoly, over liquor, was established in 98 BCE, but the majority consensus at a court conference in 81 BCE led to the breaking up of this monopoly. The mathematician and official Sang Hongyang (d. 80 BCE), who later became Imperial Counselor and was one of many former merchants drafted into the government to help administer these monopolies, was responsible for the 'equable transportation' system that eliminated price variation over time from place to place. This was a government means to interfere in the profitable grain trade by eliminating speculation, since the government stocked up on grain when it was cheap and sold it to the public at a low price when private merchants demanded higher ones. This system, along with the monopolies, was criticized even during Wu's reign for bringing unnecessary hardship to merchants' profits and for forcing farmers to rely on poor-quality government-made goods and services; the monopolies and equable transportation did not last into the Eastern Han Era (25–220 CE).
During Emperor Wu's reign, the poll tax for each minor aged three to fourteen was raised from 20 to 23 coins; the rate for adults remained at 120. New taxes exacted on market transactions, wheeled vehicles, and properties were meant to bolster the growing military budget. In 119 BCE a new bronze coin weighing five shu (3.2 g/0.11 oz)—replacing the four shu coin—was issued by the government (remaining the standard coin of China until the Tang Dynasty), followed by a ban on private minting in 113 BCE. Earlier attempts to ban private minting took place in 186 and 144 BCE, but Wu's monopoly over the issue of coinage remained in place throughout the Han (although its stewardship changed hands between different government agencies). From 118 BCE to 5 CE, the Han government minted 28 billion coins, an average of 220 million a year.
Latter half of Western Han
Regency of Huo Guang
Emperor Wu's first wife, Empress Chen Jiao, was deposed in 130 BCE after allegations that she attempted witchcraft to help her produce a male heir. In 91 BCE, similar allegations were made against Emperor Wu's Crown Prince Liu Ju, the son of Emperor Wu's second wife Empress Wei Zifu. Prince Liu Ju, fearing that Emperor Wu would believe the false allegations, began a rebellion in Chang'an which lasted for five days, while Emperor Wu was away at his quiet summer retreat of Ganquan (甘泉; in modern Shaanxi). After Liu Ju's defeat, both he and Empress Wei committed suicide.
Eventually, due to his good reputation, Huo Qubing's half-brother Huo Guang was entrusted by Wu to form a triumvirate regency alongside the ethnically Xiongnu Jin Midi (d. 86 BCE) and Shangguan Jie (上官桀) (d. 80 BCE) over the court of his successor, the child Liu Fuling, known posthumously as Emperor Zhao of Han (r. 87–74 BCE). Jin Midi died a year later, and by 80 BCE Shangguan Jie and Imperial Counselor Sang Hongyang had been executed after they were accused of supporting Emperor Zhao's older brother Liu Dan (劉旦), the King of Yan, as emperor; this gave Huo unrivaled power. However, he did not abuse his power in the eyes of the Confucian establishment and gained popularity for reducing Emperor Wu's taxes.
Emperor Zhao died in 74 BCE without a successor, while the one chosen to replace him on July 18, his nephew Prince He of Changyi, was removed on August 14 after displaying a lack of character or capacity to rule. Prince He's removal was secured with a petition signed by all the leading ministers and submitted to Empress Dowager Shangguan for approval. Liu Bingyi (Liu Ju's grandson) was named Emperor Xuan of Han (r. 74–49 BCE) on September 10. Huo Guang remained in power as regent over Emperor Xuan until he died of natural causes in 68 BCE. Yet in 66 BCE the Huo clan was charged with conspiracy against the throne and eliminated. This was the culmination of Emperor Xuan's revenge after Huo Guang's wife had poisoned his beloved Empress Xu Pingjun in 71 BCE only to have her replaced by Huo Guang's daughter Empress Huo Chengjun (the latter was deposed in September 66 BCE). Liu Shi, son of Empress Xu, succeeded his father as Emperor Yuan of Han (r. 49–33 BCE).
Reforms and frugality
During Emperor Wu's reign and Huo Guang's regency, the dominant political faction was the Modernist Party. This party favored greater government intervention in the private economy, with government monopolies over salt and iron, higher taxes exacted on private business, and price controls, all of which were used to fund an aggressive foreign policy of territorial expansion; it also followed the Qin Dynasty approach to discipline by meting out more punishments for faults and fewer rewards for service. After Huo Guang's regency, the Reformist Party gained more leverage over state affairs and policy decisions. This party favored the abolishment of government monopolies, limited government intervention in the private economy, a moderate foreign policy, limited colonization efforts, frugal budget reform, and a return to the Zhou Dynasty ideal of granting more rewards for service to display the dynasty's magnanimity. The Reformists' influence can be seen in the abolition of the central government's salt and iron monopolies in 44 BCE; these were reinstated in 41 BCE, only to be abolished again during the 1st century CE and transferred to local administrations and private entrepreneurship. By 66 BCE the Reformists had cancelled many of the lavish spectacles, games, and entertainments installed by Emperor Wu to impress foreign dignitaries, on the grounds that they were excessive and ostentatious.
Spurred by alleged signs from Heaven warning the ruler of his incompetence, Emperor Yuan (Liu Shi) and Emperor Cheng of Han (r. 37–3 BCE, Liu Ao 劉驁) granted a total of eighteen general amnesties during their combined reigns. Emperor Yuan reduced the severity of punishment for several crimes, while Cheng shortened judicial procedures in 34 BCE since they were disrupting the lives of commoners. While the Modernists had accepted sums of cash from criminals to have their sentences commuted or even dropped, the Reformists reversed this policy since it favored the wealthy over the poor and was not an effective deterrent against crime.
Emperor Cheng made major reforms to state-sponsored religion. The Qin Dynasty had worshipped four main legendary deities, with another added by Emperor Gaozu in 205 BCE; these were the Five Powers, or Wudi (五帝). In 31 BCE Emperor Cheng, in an effort to gain Heaven's favor and be blessed with a male heir, halted all ceremonies dedicated to the Five Powers and replaced them with ceremonies for the supreme god Shangdi, whom the kings of Zhou had worshipped.
Foreign relations and war
The first half of the 1st century BCE witnessed several succession crises for the Xiongnu leadership, allowing Han to further cement its control over the Western Regions. The Han general Fu Jiezi assassinated the pro-Xiongnu King of Loulan in 77 BCE. The Han formed a coalition with the Wusun, Dingling, and Wuhuan, and the coalition forces inflicted a major defeat on the Xiongnu in 72 BCE. The Han regained its influence over the Turpan Depression after defeating the Xiongnu at the Battle of Jushi in 67 BCE. In 65 BCE Han was able to install a new King of Kucha (a state north of the Taklamakan Desert) who would be agreeable to Han interests in the region. The office of the Protectorate of the Western Regions, first given to Zheng Ji (d. 49 BCE), was established in 60 BCE to supervise colonial activities and conduct relations with the small kingdoms of the Tarim Basin.
After Zhizhi Chanyu (r. 56–36 BCE) had inflicted a serious defeat on his rival brother and royal contender Huhanye Chanyu (呼韓邪) (r. 58–31 BCE), Huhanye and his supporters debated whether to request Han aid and become a Han vassal. He decided to do so in 52 BCE. Huhanye sent his son as a hostage to Han and personally paid homage to Emperor Xuan during the 51 BCE Chinese New Year celebration. Under the advocacy of the Reformists, Huhanye was seated as a distinguished guest of honor and was given rich rewards of 5 kg (160 oz t) of gold, 200,000 cash coins, 77 suits of clothes, 8,000 bales of silk fabric, 1,500 kg (3,300 lb) of silk floss, and 15 horses, in addition to 680,000 L (19,300 U.S. bu) of grain sent to him when he returned home.
Huhanye Chanyu and his successors were encouraged to make further trips of homage to the Han court by the increasing amount of gifts showered on them after each visit; this drew complaints from some ministers in 3 BCE, yet the financial burden of pampering their vassal was deemed preferable to the cost of the heqin agreement. Zhizhi Chanyu initially attempted to send hostages and tribute to the Han court in hopes of ending Han support of Huhanye, but eventually turned against Han. Subsequently, the Han general Chen Tang and Protector General Gan Yanshou (甘延壽/甘延寿), acting without explicit permission from the Han court, killed Zhizhi at his capital of Shanyu City (in modern Taraz, Kazakhstan) in 36 BCE. The Reformist Han court, reluctant to reward independent missions, let alone foreign interventionism, gave Chen and Gan only modest rewards. Despite the favor shown to Huhanye, he was not given a Han princess; instead, he was given the Lady Wang Zhaojun, one of the Four Beauties of ancient China. This marked a departure from the earlier heqin agreement, under which a Chinese princess had been handed over to the Chanyu as his bride.
Wang Mang's usurpation
Wang Mang seizes control
The long life of Empress Wang Zhengjun (71 BCE–13 CE), wife of Emperor Yuan and mother of Emperor Cheng, ensured that her male relatives would be appointed one after another to the role of regent, officially known as Commander-in-Chief. Emperor Cheng, who was more interested in cockfighting and chasing after beautiful women than in administering the empire, left much of the affairs of state to his relatives of the Wang clan. On November 28, 8 BCE, Wang Mang (45 BCE–23 CE), a nephew of Empress Dowager Wang, became the new Commander-in-Chief. However, when Emperor Ai of Han (r. 7–1 BCE, Liu Xin) took the throne, his grandmother Consort Fu (Emperor Yuan's concubine) became the leading figure in the palace and forced Wang Mang to resign on August 27, 7 BCE; in 5 BCE Wang was made to leave the capital for his marquessate.
Due to pressure from Wang's supporters, Emperor Ai invited Wang Mang back to the capital in 2 BCE. A year later Emperor Ai died of illness without a son. Wang Mang was reinstated as regent over Emperor Ping of Han (r. 1 BCE – 6 CE, Liu Jizi), a first cousin of the former emperor. Although Wang had married his daughter to Emperor Ping, the latter was still a child when he died in 6 CE. In July of that year, Grand Empress Dowager Wang confirmed Wang Mang as acting emperor (jiahuangdi 假皇帝) with the child Liu Ying as his designated successor, even though a Liu family marquess had revolted against Wang a month earlier, followed by others who were outraged that he was assuming greater power than the imperial Liu family. These rebellions were quelled, and Wang Mang promised to hand over power to Liu Ying when he came of age. Despite this promise, Wang initiated a propaganda campaign to show that Heaven was sending signals that it was time for Han's rule to end. On January 10, 9 CE, he announced that Han had run its course and accepted requests that he proclaim himself emperor of the Xin Dynasty (9–23 CE).
Traditionalist reforms
Wang Mang had a grand vision to restore China to a fabled golden age achieved in the early Zhou Dynasty, the era which Confucius had idealized. He attempted sweeping reforms, including the outlawing of slavery and the institution of the King's Fields system in 9 CE, which nationalized land ownership and allotted a standard amount of land to each family. Slavery was reestablished and the land reform regime was cancelled in 12 CE due to widespread protest.
The historian Ban Gu (32–92 CE) wrote that Wang's reforms led to his downfall, yet the historian Hans Bielenstein points out that, aside from the slavery and land reforms, most of Wang's measures were in line with earlier Han policies. Although the new denominations of currency he introduced in 7 CE, 9 CE, 10 CE, and 14 CE debased the value of coinage, earlier introductions of lighter-weight currencies had caused economic damage as well. Wang renamed all the commanderies of the empire as well as bureaucratic titles, yet there were precedents for this too. The government monopolies were rescinded in 22 CE because they could no longer be enforced during the large-scale rebellion against him (spurred by massive flooding of the Yellow River).
Foreign relations under Wang
The half-Chinese, half-Xiongnu noble Yituzhiyashi (伊屠智牙師), son of Huhanye Chanyu and Wang Zhaojun, became a vocal partisan for Han China within the Xiongnu realm; Bielenstein claims that this led conservative Xiongnu nobles to anticipate a break in the alliance with Han. The moment came when Wang Mang assumed the throne and demoted the Chanyu to a lesser rank; this became a pretext for war. During the winter of 10–11 CE, Wang amassed 300,000 troops along the northern border of Han China, a show of force which led the Xiongnu to back down. Yet when raiding continued, Wang Mang had the princely Xiongnu hostage held by Han authorities executed. Diplomatic relations were repaired when Xian (咸) (r. 13–18 CE) became the chanyu, only to be soured again when Huduershi Chanyu (呼都而尸) (r. 18–46 CE) took the throne and raided Han's borders in 19 CE.
The Tarim Basin kingdom of Yanqi (Karasahr, located east of Kucha, west of Turpan) rebelled against Xin authority in 13 CE, killing Han's Protector General Dan Qin (但欽). Wang Mang sent a force to retaliate against Karasahr in 16 CE, quelling their resistance and ensuring that the region would remain under Chinese control until the widespread rebellion against Wang Mang toppled his rule in 23 CE. Wang also extended Chinese influence over Tibetan tribes in the Kokonor region and fended off an attack in 12 CE by Goguryeo (an early Korean state located around the Yalu River) in the Korean peninsula. However, as the widespread rebellion in China mounted from 20–23 CE, the Koreans raided Lelang Commandery and Han did not reassert itself in the region until 30 CE.
Restoration of the Han
Natural disaster and civil war
Before 3 CE, the Yellow River had emptied into the Bohai Sea at Tianjin, but the gradual buildup of silt in its riverbed—which raised the water level each year—overpowered the dikes built to prevent flooding, and the river split in two, with one arm flowing south of the Shandong Peninsula and into the East China Sea. A second flood in 11 CE changed the course of the northern branch of the river so that it emptied slightly north of the Shandong Peninsula, yet far south of Tianjin. With much of the southern North China Plain inundated following the creation of the Yellow River's southern branch, thousands of starving peasants displaced from their homes formed groups of bandits and rebels, most notably the Red Eyebrows. Wang Mang's armies tried to quell these rebellions in 18 and 22 CE but failed.
Liu Yan (d. 23 CE), a descendant of Emperor Jing, led a group of rebellious gentry from Nanyang who had Yan's third cousin Liu Xuan (劉玄) accept the title Emperor Gengshi of Han (r. 23–25 CE) on March 11, 23 CE. Liu Xiu, a brother of Liu Yan and the future Emperor Guangwu of Han (r. 25–57 CE), distinguished himself at the Battle of Kunyang on July 7, 23 CE, when he relieved a city besieged by Wang Mang's forces and turned the tide of the war. Soon afterwards, Emperor Gengshi had Liu Yan executed on grounds of treason; Liu Xiu, fearing for his life, resigned from his office as Minister of Ceremonies and avoided public mourning for his brother, for which the emperor gave him a marquessate and a promotion to general.
Gengshi's forces then targeted Chang'an, but a local insurgency broke out in the capital. From October 4–6 Wang Mang made a last stand at the Weiyang Palace, only to be killed and decapitated; his head was sent to Gengshi's headquarters at Wan (i.e., Nanyang) before Gengshi's armies even reached Chang'an on October 9. Emperor Gengshi made Luoyang his new capital, where he invited the Red Eyebrows leader Fan Chong (樊崇) to stay, yet Gengshi granted him only honorary titles, so Fan decided to flee once his men began to desert him. Gengshi moved the capital back to Chang'an in 24 CE, yet in the following year the Red Eyebrows defeated his forces, appointed their own puppet ruler Liu Penzi, entered Chang'an, and captured the fleeing Gengshi, whom they demoted to King of Changsha before killing him.
Reconsolidation under Guangwu
While acting as a commissioner under Emperor Gengshi, Liu Xiu gathered a significant following after putting down a local rebellion (in what is now Hebei province). He claimed the Han throne himself on August 5, 25 CE and occupied Luoyang as his capital on November 5. Eleven others also claimed the title of emperor before he eventually unified the empire. With the efforts of his officers Deng Yu and Feng Yi, Guangwu forced the wandering Red Eyebrows to surrender on March 15, 27 CE, resettling them at Luoyang, yet he had their leader Fan Chong executed when a plot of rebellion was revealed.
From 26–30 CE, Guangwu defeated various warlords and conquered the Central Plain and Shandong Peninsula in the east. Allying with the warlord Dou Rong (竇融) of the distant Hexi Corridor in 29 CE, Guangwu nearly defeated the Gansu warlord Wei Xiao (隗囂/隗嚣) in 32 CE, seizing Wei's domain in 33 CE. The last adversary standing was Gongsun Shu (公孫述), whose base was at Chengdu in modern Sichuan. Although Guangwu's forces successfully burned down Gongsun's fortified pontoon bridge stretching across the Yangzi River, Guangwu's commanding general Cen Peng (岑彭) was killed in 35 CE by an assassin sent by Gongsun Shu. Nevertheless, Han General Wu Han (d. 44 CE) resumed Cen's campaign along the Yangzi and Min rivers and destroyed Gongsun's forces by December 36 CE.
Since Chang'an is located west of Luoyang, the names Western Han (202 BCE – 9 CE) and Eastern Han (25–220 CE) are accepted by historians. Luoyang's 10 m (32 ft) tall eastern, western, and northern walls still stand today, although the southern wall was destroyed when the Luo River changed its course. Within its walls the city had two prominent palaces, both of which existed during Western Han but were expanded by Guangwu and his successors. While Eastern Han Luoyang is estimated to have held roughly 500,000 inhabitants, the first known census data for the whole of China, dated 2 CE, recorded a population of nearly 58 million. Compared with the census of 140 CE, when the total population was registered at roughly 48 million, this indicates a significant migratory shift of up to 10 million people from northern to southern China during Eastern Han, largely because of natural disasters and wars with nomadic groups in the north. Population size fluctuated according to the periodically updated Eastern Han censuses, but the historian Sadao Nishijima notes that this does not reflect a dramatic loss of life so much as the government's inability at times to register the entire populace.
Policies under Guangwu, Ming, Zhang, and He
Scrapping Wang Mang's denominations of currency, Emperor Guangwu reintroduced Western Han's standard five shu coin in 40 CE. To make up for revenue lost after the salt and iron monopolies were canceled, the government heavily taxed private manufacturers while purchasing its armies' swords and shields from private businesses. In 31 CE he allowed peasants to pay a military substitution tax to avoid conscription into the armed forces for a year of training and a year of service; instead he built a volunteer force which lasted throughout Eastern Han. He also allowed peasants to avoid the one-month corvée duty with a commutable tax as hired labor became more popular. Wang Mang had demoted all Han marquesses to commoner status, yet Guangwu made an effort from 27 CE onwards to find their relatives and restore abolished marquessates.
Emperor Ming of Han (r. 57–75 CE, Liu Yang) reestablished the Office for Price Adjustment and Stabilization and its price stabilization system, under which the government bought grain when it was cheap and sold it to the public when private commercial prices were high due to limited stocks. However, he canceled the price stabilization scheme in 68 CE when he became convinced that government hoarding of grain only made wealthy merchants even richer. With the renewed economic prosperity brought about by his father's reign, Emperor Ming addressed the flooding of the Yellow River by repairing various dams and canals. On April 8, 70 CE, an edict boasted that the southern branch of the Yellow River emptying south of the Shandong Peninsula had finally been cut off by Han engineering. A patron of scholarship, Emperor Ming also established a school for young nobles in addition to the Imperial University.
Emperor Zhang of Han (r. 75–88 CE, Liu Da) faced an agrarian crisis when a cattle epidemic broke out in 76 CE. In addition to providing disaster relief, Zhang also reformed legal procedures and lightened punishments involving the bastinado, since he believed that this would restore the seasonal balance of yin and yang and cure the epidemic. To further display his benevolence, in 78 CE he halted the corvée labor on the canal works of the Hutuo River running through the Taihang Mountains, believing it was causing too much hardship for the people; in 85 CE he granted a three-year poll tax exemption to any woman who gave birth and exempted her husband for a year. Unlike other Eastern Han rulers who sponsored the New Texts tradition of the Confucian Five Classics, Zhang was a patron of the Old Texts tradition and held scholarly debates on the validity of the schools. Rafe de Crespigny writes that the major reform of the Eastern Han period was Zhang's reintroduction in 85 CE of an amended Sifen calendar, replacing Emperor Wu's Taichu calendar of 104 BCE, which had become inaccurate over two centuries (the former measured the tropical year at 365.25 days, like the Julian Calendar, while the latter measured the tropical year at 365 385⁄1539 days and the lunar month at 29 43⁄81 days).
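For comparison, the two calendars' fractional constants can be written out in decimal form; the conversion below is a simple arithmetic check added for illustration and is not drawn from the cited sources.

```latex
% Decimal equivalents of the calendar constants (arithmetic check only)
\[
\text{Sifen tropical year: } 365\tfrac{1}{4} = 365.25 \text{ days}
\]
\[
\text{Taichu tropical year: } 365\tfrac{385}{1539} \approx 365.2502 \text{ days},
\qquad
\text{Taichu lunar month: } 29\tfrac{43}{81} \approx 29.5309 \text{ days}
\]
```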
Emperor He of Han (r. 88–105 CE, Liu Zhao) was tolerant of both New Text and Old Text traditions, though orthodox studies were in decline and works skeptical of New Texts, such as Wang Chong's (27 – c. 100 CE) Lunheng, disillusioned the scholarly community with that tradition. He also showed an interest in history when he commissioned the Lady Ban Zhao (45–116 CE) to use the imperial archives in order to complete the Book of Han, the work of her deceased father and brother. This set an important precedent of imperial control over the recording of history and thus was unlike Sima Qian's far more independent work, the Records of the Grand Historian (109–91 BCE). When plagues of locusts, floods, and earthquakes disrupted the lives of commoners, Emperor He's relief policies were to cut taxes, open granaries, provide government loans, forgive private debts, and resettle people away from disaster areas. Believing that a severe drought in 94 CE was the cosmological result of injustice in the legal system, Emperor He personally inspected prisons. When he found that some prisoners had false charges levelled against them, he sent the Prefect of Luoyang to prison; rain allegedly came soon afterwards.
Foreign relations and split of the Xiongnu realm
The Vietnamese Trưng Sisters led an uprising in the Red River Delta of Jiaozhi Commandery in 40 CE. Guangwu sent the elderly general Ma Yuan (c. 14 BCE – 49 CE), who defeated them in 42–43 CE. The sisters' native Dong Son drums were melted down and recast into a large bronze horse statue presented to Guangwu at Luoyang.
Meanwhile, Huduershi Chanyu was succeeded by his son Punu (蒲奴) in 46 CE, thus breaking Huhanye's orders that only a Xiongnu ruler's brother was a valid successor; Huduershi's nephew Bi (比) was outraged and in 48 CE was proclaimed a rival Chanyu. This split created the Northern Xiongnu and Southern Xiongnu, and like Huhanye before him, Bi turned to the Han for aid in 50 CE. When Bi came to pay homage to the Han court, he was given 10,000 bales of silk fabrics, 2,500 kg (5,500 lb) of silk, 500,000 L (14,000 U.S. bu) of rice, and 36,000 head of cattle. Unlike in Huhanye's time, however, the Southern Xiongnu were overseen by a Han Prefect who not only acted as an arbiter in Xiongnu legal cases, but also monitored the movements of the Chanyu and his followers who were settled in Han's northern commanderies in Shanxi, Gansu, and Inner Mongolia. Northern Xiongnu attempts to enter Han's tributary system were rejected.
Following Xin's loss of control over the Western Regions, the Kingdom of Yarkand looked after the Chinese officials and families stranded in the Tarim Basin and fought the Xiongnu for control over it. Emperor Guangwu, preoccupied with civil wars in China, simply granted King Kang of Yarkand an official title in 29 CE and in 41 CE made his successor King Xian a Protector General (later reduced to the honorary title of "Great General of Han"). Yarkand overtaxed its subjects of Khotan, Turpan, Kucha, and Karasahr, all of which decided to ally with the Northern Xiongnu. By 61 CE Khotan had conquered Yarkand, yet this led to a war among the kingdoms to decide which would be the next hegemon. The Northern Xiongnu took advantage of the infighting, conquered the Tarim Basin, and used it as a base to stage raids into Han's Hexi Corridor by 63 CE. In that year, the Han court opened border markets for trade with the Northern Xiongnu in hopes of appeasing them.
Yet Han sought to reconquer the Tarim Basin. At the Battle of Yiwulu in 73 CE, Dou Gu (d. 88 CE) reached as far as Lake Barkol when he defeated a Northern Xiongnu chanyu and established an agricultural garrison at Hami. Although Dou Gu was able to evict the Xiongnu from Turpan in 74 CE, when the Han appointed Chen Mu (d. 75 CE) as the new Protector General of the Western Regions, the Northern Xiongnu invaded the Bogda Mountains while their allies Karasahr and Kucha killed Chen Mu and his troops. The Han garrison at Hami was forced to withdraw in 77 CE (and was not reestablished until 91 CE). The next Han expedition against the Northern Xiongnu was led in 89 CE by Dou Xian (d. 92 CE); at the Battle of Ikh Bayan, Dou's forces chased the Northern Chanyu into the Altai Mountains, allegedly killing 13,000 Xiongnu and accepting the surrender of 200,000 Xiongnu from 81 tribes.
After Dou sent 2,000 cavalry to attack the Northern Xiongnu base at Hami, the initiative passed to the general Ban Chao (d. 102 CE), who had earlier installed a new king of Kashgar as a Han ally. When this king turned against him and enlisted the aid of Sogdiana in 84 CE, Ban Chao arranged an alliance with the Kushan Empire (of modern North India, Pakistan, Afghanistan, and Tajikistan), which put political pressure on Sogdiana to back down; Ban later assassinated King Zhong of Kashgar. Since Kushan had provided aid to Ban Chao in quelling Turpan and sent tribute and hostages to Han, its ruler Vima Kadphises (r. c. 90 – c. 100 CE) requested a Chinese princess as a bride; when this was rejected in 90 CE, Kushan marched 70,000 troops to Wakhan against Ban Chao. Ban used scorched-earth tactics against Kushan, forcing them to request food supplies from Kucha. When the Kushan messengers were intercepted by Ban, Kushan was forced to withdraw. In 91 CE, Ban was appointed Protector General of the Western Regions, an office he held until 101 CE.
Tributary gifts and emissaries from the Arsacid Empire, then under Pacorus II of Parthia (r. 78–105 CE), came to the Han in 87 CE, 89 CE, and 101 CE, bringing exotic animals such as ostriches and lions. When Ban Chao dispatched his emissary Gan Ying in 97 CE to reach Daqin (the Roman Empire), Gan traveled no farther than a "large sea", perhaps the Persian Gulf. However, from oral accounts Gan was able to describe Rome as having hundreds of walled cities, a postal delivery network, the submission of dependent states, and a system of government in which the Roman "king" (i.e. consul) is "not a permanent figure but is chosen as the man most worthy." Elephants and rhinoceroses were also presented as gifts to the Han court in 94 CE and 97 CE by a king in what is now Burma. The first known diplomatic mission from a ruler in Japan came in 57 CE (followed by another in 107 CE); a golden seal of Emperor Guangwu's was even discovered in 1784 in Chikuzen Province. The first mention of Buddhism in China was made in 65 CE, when the Chinese clearly associated it with Huang-Lao Daoism. Emperor Ming had the first Buddhist temple of China—the White Horse Temple—built at Luoyang in honor of two foreign monks: Jiashemoteng (迦葉摩騰) (Kāśyapa Mātanga) and Zhu Falan (竺法蘭) (Dharmaratna the Indian). These monks allegedly translated the Sutra of Forty-two Chapters from Sanskrit into Chinese, although it is now established that this text was not translated into Chinese until the 2nd century CE.
Court, kinsmen, and consort clans
Aside from his divorce of Empress Guo Shengtong in 41 CE in order to install his original wife, Empress Yin Lihua, as empress instead, there was little drama with imperial kinsmen at Guangwu's court: Empress Guo was made a queen dowager and her son, the former heir apparent, was demoted to the status of a king. However, trouble with imperial kinsmen turned violent during Ming's reign. In addition to exiling his half-brother Liu Ying (d. 71 CE, by suicide) after Liu Ying allegedly used witchcraft to curse him, Emperor Ming also targeted hundreds of others with similar charges of using occult omens and witchcraft, resulting in exile, torture to extract confessions, and execution. This trend of persecution did not end until Emperor Zhang took the throne; he was for the most part generous towards his brothers and recalled to the capital many who had been exiled by Ming.
Of greater consequence for the dynasty, however, was Emperor He's coup of 92 CE, the first occasion on which eunuchs became significantly involved in the court politics of Eastern Han. Emperor Zhang had maintained a good relationship with his titular mother, Ming's widow, the humble Empress Dowager Ma (d. 79 CE), but Empress Dowager Dou (d. 97 CE), the widow of Emperor Zhang, was overbearing towards Emperor He (son of Emperor Zhang and Consort Liang) in his early reign and, concealing the identity of his natural mother from him, raised He as her own after purging the Liang family from power. In order to put He on the throne, Empress Dowager Dou had even demoted the crown prince Liu Qing (78–106 CE) to the status of a king and forced his mother, Consort Song (d. 82 CE), to commit suicide. Unwilling to yield his power to the Dou clan any longer, Emperor He enlisted the aid of palace eunuchs led by Zheng Zhong (d. 107 CE) to overthrow the Dou clan on charges of treason, stripping them of titles, exiling them, forcing many to commit suicide, and placing the Empress Dowager under house arrest.
Middle age of Eastern Han
Empress Deng Sui, consort families, and eunuchs
Empress Deng Sui (d. 121 CE), widow of Emperor He, became empress dowager in 105 CE and thus had the final say in appointing He's successor (since he had named none); she placed his infant son Liu Long on the throne, later known as Emperor Shang of Han (r. 105–106 CE). When the latter died at only age one, she placed Emperor He's young nephew Liu Hu (Liu Qing's son) on the throne, known posthumously as Emperor An of Han (r. 106–125 CE), bypassing Emperor He's other son Liu Sheng (劉勝). With a young ruler on the throne, Empress Deng was the de facto ruler until her death, since her brother Deng Zhi's (鄧騭) brief tenure as General-in-Chief (大將軍) from 109–110 CE did not in fact make him the ruling regent. Upon her death on April 17, 121 CE, Emperor An accepted the charge of the eunuchs Li Run (李閏) and Jiang Jing (江京) that she had plotted to overthrow him; on June 3 he charged the Deng clan with treason and had its members dismissed from office, stripped of title, reduced to commoner status, and exiled to remote areas, driving many to commit suicide.
The Yan clan of Empress Yan Ji (d. 126 CE), wife of Emperor An, and the eunuchs Jiang Jing and Fan Feng (樊豐) pressured Emperor An to demote his nine-year-old heir apparent Liu Bao to the status of a king on October 5, 124 CE on charges of conspiracy, despite protests from senior government officials. When Emperor An died on April 30, 125 CE, the Empress Dowager Yan was free to choose his successor, Liu Yi (grandson of Emperor Zhang), who is known as Emperor Shao of Han. After the child died suddenly in 125 CE, the eunuch Sun Cheng (d. 132 CE) staged a palace coup, slaughtering the opposing eunuchs, and placed Liu Bao on the throne, later to be known as Emperor Shun of Han (r. 125–144 CE); Sun then put Empress Dowager Yan under house arrest, had her brothers killed, and had the rest of her family exiled to Vietnam.
Emperor Shun had no sons with Empress Liang Na (d. 150 CE), yet when his son Liu Bing briefly took the throne in 145 CE, the boy's mother, Consort Yu, was in no position of power to challenge Empress Dowager Liang. After the child Emperor Zhi of Han (r. 145–146 CE) briefly sat on the throne, Empress Dowager Liang and her brother Liang Ji (d. 159 CE), now regent General-in-Chief, decided that Liu Zhi, known posthumously as Emperor Huan of Han (r. 146–168 CE), should take the throne, since he was betrothed to their sister Liang Nüying. When the younger Empress Liang died in 159 CE, Liang Ji attempted to control Emperor Huan's new favorite, Consort Deng Mengnü (later empress) (d. 165 CE). When she resisted, Liang Ji had her brother-in-law killed, prompting Emperor Huan to use eunuchs to oust Liang Ji from power; the latter committed suicide when his residence was surrounded by imperial guards. Emperor Huan died with no official heir, so his third wife, Empress Dou Miao (d. 172 CE), now empress dowager, had Liu Hong, known posthumously as Emperor Ling of Han (r. 168–189 CE), take the throne.
Reforms and policies of middle Eastern Han
To mitigate the damage caused by a series of natural disasters, Empress Dowager Deng's government attempted various relief measures: tax remissions, donations to the poor, and the immediate shipping of government grain to the hardest-hit areas. Although some water control works were repaired in 115 CE and 116 CE, many government projects became underfunded due to these relief efforts and the armed response to the large-scale Qiang rebellion of 107–118 CE. Aware of her financial constraints, the Empress Dowager limited the expenses of banquets, the fodder for imperial horses that were not pulling carriages, and the amount of luxury goods manufactured by the imperial workshops. She approved the sale of some civil offices and even secondary marquess ranks to collect more revenue; the sale of offices was continued by Emperor Huan and became extremely prevalent during Emperor Ling's reign.
Emperor An continued disaster relief programs similar to those Empress Dowager Deng had implemented, though he reversed some of her decisions, such as a 116 CE edict requiring officials to leave office for three years of mourning after the death of a parent, a Confucian ideal. Since this reversal seemed to contradict Confucian morals, Emperor An sponsored renowned scholars in an effort to shore up his popularity among Confucians. Xu Shen (58–147 CE), although an Old Text scholar and thus not aligned with the New Text tradition sponsored by Emperor An, enhanced the emperor's Confucian credentials when he presented his groundbreaking dictionary, the Shuowen Jiezi, to the court.
Financial troubles only worsened during Emperor Shun's reign, as many public works projects were handled at the local level without the central government's assistance. Yet his court still managed to supervise major efforts of disaster relief, aided in part by the invention in 132 CE of a seismometer by the court astronomer Zhang Heng (78–139 CE), who used a complex system of a vibration-sensitive swinging pendulum, mechanical gears, and falling metal balls to determine the direction of earthquakes hundreds of kilometers away. Shun's greatest patronage of scholarship was the repair in 131 CE of the now dilapidated Imperial University, which still operated as a pathway for young gentrymen to enter the civil service. Officials protested against the enfeoffment of the eunuch Sun Cheng and his associates as marquesses, with further protest in 135 CE when Shun allowed the sons of eunuchs to inherit their fiefs, yet the larger concern was the rising power of the Liang faction.
To counter the unseemly image of placing child emperors on the throne, Liang Ji attempted to paint himself as a populist by granting general amnesties, awarding people noble ranks, reducing the severity of penalties (the bastinado was no longer used), allowing exiled families to return home, and allowing convicts to settle on new land in the frontier. Under his stewardship, the Imperial University was given a formal examination system whereby candidates took exams on different classics over a period of years in order to gain entrance into public office. Despite these positive reforms, Liang Ji was widely accused of corruption and greed. Yet when Emperor Huan overthrew Liang with the help of his eunuch allies, students of the Imperial University took to the streets in the thousands, chanting the names of the eunuchs they opposed in one of the earliest student protests in history.
After Liang Ji was overthrown, Huan distanced himself from the Confucian establishment and instead sought legitimacy through a revived imperial patronage of Huang-Lao Daoism; this renewed patronage of Huang-Lao was not continued after his reign. As the economy worsened, Huan built new hunting parks, imperial gardens, and palace buildings, and expanded his harem to house thousands of concubines. The gentry class became alienated by Huan's corrupt, eunuch-dominated government, and many refused nominations to serve in office, since current Confucian beliefs dictated that morality and personal relationships superseded public service. Emperor Ling kept far fewer concubines than Huan, yet Ling left much of the affairs of state to his eunuchs. Instead, Ling busied himself play-acting as a traveling salesman with concubines dressed as market vendors, or dressing in military costume as the 'General Supreme' for his parading Army of the Western Garden.
Foreign relations and war of middle Eastern Han
The Eastern-Han court colonized and periodically reasserted the Chinese military presence in the Western Regions only as a means to combat the Northern Xiongnu. Han forces were expelled from the Western Regions first by the Xiongnu between 77–90 CE and then by the Qiang between 107–122 CE. In both of these periods, the financial burdens of reestablishing and expanding western colonies, as well as the liability of sending financial aid requested by Tarim-Basin tributary states, were viewed by the court as reasons to forestall the reopening of foreign relations in the region.
At the beginning of Empress Dowager Deng's regency, the Protector General of the Western Regions Ren Shang (d. 118 CE) was besieged at Kashgar. Although he was able to break the siege, he was recalled and replaced before the Empress Dowager began to withdraw forces from the Western Regions in 107 CE. However, a transitional force was still needed. The Qiang people, who had been settled by the Han government in southeastern Gansu since Emperor Jing's reign, would aid Han in this withdrawal. Throughout Eastern Han, the Qiang often revolted against Han authority after Han border officials robbed them of goods and even women and children. A group of Qiang people conscripted to reinforce the Protector General during his withdrawal decided instead to mutiny against him. Their revolt in the northwestern province of Liang (涼州) was put down in 108 CE, but it spurred a greater Qiang rebellion that would last until 118 CE, cutting off Han's access to Central Asia. The Qiang problem was exacerbated in 109 CE by a combined Southern Xiongnu, Xianbei, and Wuhuan rebellion in the northeast. The total monetary cost for putting down the Qiang rebellion in Liang province was 24 million cash (out of an average of 220 million cash minted annually), while the people of three entire commanderies within eastern Liang province and one commandery within Bing province were temporarily resettled in 110 CE.
Following the general Ban Yong's reopening of relations with the Western Regions in 123 CE, two of the Liang province commanderies were reestablished in 129 CE, only to be withdrawn again a decade later. Even after eastern Liang province (comprising modern southeastern Gansu and Ningxia) was resettled, there was another massive rebellion there in 184 CE, instigated by Han Chinese, Qiang, Xiongnu, and Yuezhi rebels. Yet the Tarim-Basin states continued to offer tribute and hostages to China into the final decade of Han, while the agricultural garrison at Hami was only gradually abandoned after 153 CE.
Of perhaps greater consequence for the Han Dynasty and future dynasties was the ascendance of the Xianbei people. They filled the vacuum of power on the vast northern steppe after the Northern Xiongnu were defeated by Han and fled to the Ili River valley (in modern Kazakhstan) in 91 CE. The Xianbei quickly occupied the deserted territories and incorporated some 100,000 remnant Xiongnu families into their new federation, which by the mid 2nd century CE stretched from the western borders of the Buyeo Kingdom in Manchuria, to the Dingling in southern Siberia, and all the way west to the Ili River valley of the Wusun people. Although they raided Han in 110 CE to force a negotiation of better trade agreements, the later leader Tanshihuai (檀石槐) (d. 180 CE) refused kingly titles and tributary arrangements offered by Emperor Huan and defeated Chinese armies under Emperor Ling. When Tanshihuai died in 180 CE, the Xianbei Federation largely fell apart, yet it grew powerful once more during the 3rd century CE.
After being introduced in the 1st century CE, Buddhism became more popular in China during the 2nd century CE. The Parthian monk An Shigao traveled from Parthia to China in 148 CE and made translations of Buddhist works on the Hinayana and yoga practices, which the Chinese associated with Daoist exercises. The Kushan monk Lokaksema from Gandhara was active in China from 178–198 CE; he translated the Perfection of Wisdom, Shurangama Sutra, and Pratyutpanna Sutra, and introduced to China the concepts of Akshobhya Buddha, Amitābha Buddha (of Pure Land Buddhism), and teachings about Manjusri. In 166 CE, Emperor Huan made sacrifices to Laozi and the Buddha. In that same year, the Book of Later Han records that Romans reached China from the maritime south and presented gifts to Huan's court, claiming to represent the Roman emperor Marcus Aurelius Antoninus (Andun 安敦) (r. 161–180 CE). De Crespigny speculates that they were Roman merchants, not diplomats.
Decline of Eastern Han
Partisan Prohibitions
In 166 CE, the official Li Ying (李膺) was accused by palace eunuchs of plotting treason with students at the Imperial University and associates in the provinces who opposed the eunuchs. Emperor Huan was furious, arresting Li and his followers, who were only released from prison the following year due to pleas from the General-in-Chief Dou Wu (d. 168 CE) (Emperor Huan's father-in-law). However, Li Ying and hundreds of his followers were proscribed from holding any offices and were branded as partisans (黨人).
After Emperor Huan's death, at the urging of the Grand Tutor (太傅) Chen Fan (陳蕃) (d. 168 CE), Dou Wu presented a memorial to the court in June 168 CE denouncing the leading eunuchs as corrupt and calling for their execution, but Empress Dowager Dou refused the proposal. This was followed by a memorial presented by Chen Fan calling for the heads of Hou Lan (d. 172 CE) and Cao Jie (d. 181 CE), and when this too was refused Dou Wu took formal legal action which could not be ignored by the court. When Shan Bing, a eunuch associate of Chen and Dou's, gained a forced confession from another eunuch that Cao Jie and Wang Fu (王甫) plotted treason, he prepared another damning written memorial on the night of October 24–25 which the opposing eunuchs secretly opened and read. Cao Jie armed Emperor Ling with a sword and hid him with his wet nurse, while Wang Fu had Shan Bing killed and Empress Dowager Dou incarcerated so that the eunuchs could use the authority of her seal.
Chen Fan entered the palace with eighty followers and engaged in a shouting match with Wang Fu, yet Chen was gradually surrounded, detained, and later trampled to death in prison that day (his followers were unharmed). At dawn, the general Zhang Huan (張奐), misled by the eunuchs into believing that Dou Wu was committing treason, engaged in a shouting match with Dou Wu at the palace gates, but as Dou's followers slowly deserted him and trickled over to Zhang's side, Dou was forced to commit suicide. In neither of these confrontations did any actual physical fighting break out.
With Dou Wu eliminated and the Empress Dowager under house arrest, the eunuchs renewed the proscriptions against Li Ying and his followers; in 169 CE they had hundreds more officials and students prohibited from holding office, sent their families into exile, and had Li Ying executed. The eunuchs barred potential enemies from court, sold and bartered offices, and infiltrated the military command. Emperor Ling even referred to the eunuchs Zhao Zhong and Zhang Rang as his "mother" and "father"; the two had so much influence over the emperor that they convinced him not to ascend the top floors of tall towers in the capital, an effort to conceal from him the enormous mansions that the eunuchs had built for themselves. Although the partisan prohibitions were extended to hundreds more in 176 CE (including the distant relatives of those earlier proscribed), they were abolished in 184 CE with the outbreak of the Yellow Turban Rebellion, largely because the court feared that the gentry—bitter from their banishment from office—would join the rebel cause.
Yellow Turban Rebellion
In 142 CE, Zhang Daoling founded the Five Pecks of Rice religious society in Sichuan. After claiming to have seen the deified Laozi as a holy prophet who appointed him as his earthly representative known as the Celestial Master, Zhang created a highly organized, hierarchical Daoist movement which accepted only pecks of rice and no money from its lay followers. In 184 CE, the Five Pecks of Rice under Zhang Lu staged a rebellion in Sichuan and set up a theocratic Daoist state that endured until 215 CE.
Like the Five Pecks of Rice, the Yellow Turban Daoists of the Yellow and Huai River regions also built a hierarchical church and believed that illness was the result of personal sins needing confession. The Yellow Turbans became a militant organization that challenged Han authority by claiming they would bring about a utopian era of peace. Zhang Jue, a renowned faith-healer and the leader of the Yellow Turbans, and his hundreds of thousands of followers, identified by the yellow cloth they wrapped around their foreheads, led a rebellion across eight provinces in 184 CE. They had early successes against imperial troops, but by the end of 184 CE the Yellow Turban leadership—including Zhang—had been killed. Smaller groups of Yellow Turbans continued to revolt in the following years (until the last large group was incorporated into the forces of Chancellor Cao Cao in 192 CE), yet de Crespigny asserts that the rebellion's impact on the fall of Han was less consequential than the events which transpired in the capital following the death of Emperor Ling on May 13, 189 CE. However, Patricia Ebrey points out that many of the generals who raised armies to quell the rebellion never disbanded their forces and used them to amass their own power outside of imperial authority.
Downfall of the eunuchs
He Jin (d. 189 CE), half-brother of Empress He (d. 189 CE), was given authority over the standing army and palace guards when appointed General-in-Chief during the Yellow Turban Rebellion. Shortly after Empress He's son Liu Bian, known later as Emperor Shao of Han, was put on the throne, the eunuch Jian Shi plotted against He Jin, was discovered, and was executed on May 27, 189 CE; He Jin thus took over Jian's Army of the Western Garden. Yuan Shao (d. 202 CE), then an officer in the Army of the Western Garden, plotted with He Jin to overthrow the eunuchs by secretly ordering several generals to march towards the capital and forcefully persuade Empress Dowager He to hand over the eunuchs. Yuan had these generals send in petition after petition to the Empress Dowager calling for the eunuchs' dismissal; Mansvelt Beck states that this "psychological war" finally broke the Empress Dowager's will and she consented. However, the eunuchs discovered the plan and used Empress Dowager He's mother, Lady Wuyang, and her brother He Miao (何苗), both of whom were sympathetic to the eunuchs, to have the order rescinded. On September 22, the eunuchs learned that He Jin had had a private conversation with the Empress Dowager about executing them. They sent a message to He Jin that the Empress Dowager had more words to share with him; once he sat down in the hall to meet her, eunuchs rushed out of hiding and beheaded him. When the eunuchs ordered the imperial secretaries to draft an edict dismissing Yuan Shao, the secretaries asked for He Jin's permission, so the eunuchs showed them He Jin's severed head.
However, the eunuchs became besieged when Yuan Shao attacked the Northern Palace and his brother Yuan Shu (d. 199 CE) attacked the Southern Palace, breaching the gate and forcing the eunuchs to flee to the Northern Palace by the covered passageway connecting both. Zhao Zhong was killed on the first day and the fighting lasted until September 25 when Yuan Shao finally broke into the Northern Palace and purportedly slaughtered two thousand eunuchs. However, Zhang Rang managed to flee with Emperor Shao and his brother Liu Xie to the Yellow River, where he was chased down by the Yuan family troops and committed suicide by jumping into the river and drowning.
Coalition against Dong Zhuo
Dong Zhuo (d. 192 CE), General of the Van (under Huangfu Song), who marched on Luoyang at Yuan Shao's request, saw the capital in flames from a distance and heard that Emperor Shao was wandering in the hills nearby. When Dong approached Emperor Shao, the latter became frightened and unresponsive, yet his brother Liu Xie explained to Dong what had happened. The ambitious Dong took effective control of Luoyang and forced Yuan Shao to flee the capital on September 26. Dong was made Excellency of Works (司空), one of the Three Excellencies. Despite protests, Dong had Emperor Shao demoted to Prince of Hongnong on September 28 while elevating his brother Liu Xie as emperor, later known as Emperor Xian of Han (r. 189–220 CE). Empress Dowager He was poisoned to death by Dong Zhuo on September 30, followed by the Prince of Hongnong on March 3, 190 CE.
Once he left the capital, Yuan Shao led a coalition of commanders, former officials, and soldiers of fortune to challenge Dong Zhuo. No longer viewing Luoyang as a safe haven, Dong burned the city to the ground and forced the imperial court to resettle at Chang'an in May 191 CE. In a conspiracy headed by the Minister over the Masses, Wang Yun (d. 192 CE), Dong was killed by his adopted son Lü Bu (d. 198 CE). Dong's subordinates then killed Wang and forced Lü to flee, throwing Chang'an into chaos.
Emperor Xian fled Chang'an in 195 CE and returned to Luoyang by August 196 CE. Meanwhile, the empire was being carved into eight spheres of influence, each ruled by powerful commanders or officials: in the northeast were Yuan Shao and Cao Cao (155–220 CE); south of them, just southeast of the capital, was Yuan Shu; south of this was Liu Biao (d. 208 CE) in Jing; Sun Ce (d. 200 CE) controlled the southeast; in the southwest were Liu Zhang (d. 219 CE) and, just north of him in Hanzhong, Zhang Lu (d. 216 CE); and the southern Liang Province was inhabited by the Qiang people and various rebel groups. Although prognostication fueled speculation over the dynasty's fate, these warlords still claimed loyalty to Han, since the emperor remained at the pinnacle of a cosmic-religious system which ensured his political survival.
Rise of Cao Cao
Cao Cao, a Commandant of Cavalry during the Yellow Turban Rebellion and then a Colonel in the Army of the Western Garden by 188 CE, was Governor of Yan Province (modern western Shandong and eastern Henan) in 196 CE when he took the emperor from Luoyang to his headquarters at Xuchang. Yuan Shu declared his own Zhong Dynasty (仲朝) in 197 CE, yet this bold move earned him the desertion of many of his followers, and he died penniless in 199 CE after attempting to offer his title to Yuan Shao. Having gained more power after defeating Gongsun Zan (d. 199 CE), Yuan Shao regretted not seizing the emperor when he had the chance and decided to act against Cao. The confrontation culminated in Cao Cao's victory at the Battle of Guandu in 200 CE, forcing Yuan to retreat to his territory. After Yuan Shao died in 202 CE, his sons fought over his inheritance, allowing Cao Cao to eliminate Yuan Tan (173–205 CE) and drive his brothers Yuan Shang and Yuan Xi to seek refuge with the Wuhuan people. Cao Cao asserted his dominance over the northeast when he defeated the Wuhuan led by Tadun at the Battle of White Wolf Mountain in 207 CE; the Yuan brothers fled to Gongsun Kang (d. 221 CE) in Liaodong, but the latter killed them and sent their heads to Cao Cao in submission.
When there was speculation that Liu Bei (161–223 CE), a scion of the imperial family who was formerly in the service of Cao Cao, was planning to take over the territory of the now ill Liu Biao in 208 CE, Cao Cao forced Liu Biao's son to surrender his father's land. Expecting Cao Cao to turn on him next, Sun Quan (182–252 CE), who inherited the territory of his brother Sun Ce in 200 CE, allied with Liu Bei and faced Cao Cao's naval force in 208 CE at the Battle of Chibi. This was a significant defeat for Cao Cao which ensured the continued disunity of China during the Three Kingdoms (220–265 CE).
Fall of the Han
When Cao Cao moved Emperor Xian to Xuchang in 196 CE, he took the title Excellency of Works, as Dong Zhuo had before him. In 208 CE, Cao abolished the three most senior offices, the Three Excellencies, and in their place recreated two offices, the Imperial Counselor and the Chancellor; he occupied the latter post. Cao was enfeoffed as Duke of Wei in 213 CE, had Emperor Xian divorce Empress Fu Shou in 214 CE, and then had him marry his daughter as Empress Cao Jie in 215 CE. Finally, Cao took the title King of Wei in 216 CE, violating the rule that only Liu family members could become kings, yet he never deposed Emperor Xian. After Cao Cao died in 220 CE, his son Cao Pi (186–226 CE) inherited the title King of Wei and gained the uneasy allegiance of Sun Quan (while Liu Bei had by this point taken over Liu Zhang's territory of Yi Province). Amid debates over prognostication and signs from Heaven indicating that the Han had lost the Mandate of Heaven, Emperor Xian agreed that the Han Dynasty had reached its end and abdicated to Cao Pi on December 11, 220 CE, thus creating the state of Cao Wei, soon to be opposed by Shu Han in 221 CE and Eastern Wu in 229 CE.
Notes
- From the Shang to the Sui dynasties, Chinese rulers were referred to in later records by their posthumous names, while emperors of the Tang to Yuan dynasties were referred to by their temple names, and emperors of the Ming and Qing dynasties were referred to by single era names for their rule. See Endymion Porter Wilkinson's Chinese History (1998), p. 106–107.
- Ebrey (1999), 60.
- Ebrey (1999), 61.
- Cullen (2006), 1–2.
- Ebrey (1999), 63.
- Loewe (1986), 112–113.
- Loewe (1986), 112–113; Zizhi Tongjian, vol. 8.
- Loewe (1986), 113.
- Loewe (1986), 114.
- Zizhi Tongjian, vol. 8.
- Loewe (1986), 114–115; Loewe (2000), 254.
- Loewe (1986), 115.
- Loewe (2000), 255.
- Loewe (1986), 115; Davis (2001), 44.
- Loewe (1986), 116.
- Loewe (2000), 255; Loewe (1986), 117; Zizhi Tongjian, vol. 9.
- Davis (2001), 44; Loewe (1986), 116.
- Davis (2001), 44–45.
- Davis (2001), 44–45; Zizhi Tongjian, vol. 9.
- Davis (2001), 45; Zizhi Tongjian, vol. 9.
- Zizhi Tongjian, vol. 9.
- Davis (2001), 45.
- Davis (2001), 45–46.
- Davis (2001), 46.
- Loewe (1986), 122.
- Loewe (1986), 120.
- Hulsewé (1986), 526; Csikszentmihalyi (2006), 23–24; Hansen (2000), 110–112.
- Tom (1989), 112–113.
- Shi (2003), 63–65.
- Loewe (1986), 122–128.
- Hinsch (2002), 20.
- Loewe (1986), 126.
- Loewe (1986), 122–128; Zizhi Tongjian, vol. 15; Book of Han, vol. 13.
- Loewe (1986), 127–128.
- Di Cosmo (2002), 174–176; Torday (1997), 71–73.
- Di Cosmo (2001), 175–189.
- Torday (1997), 75–77.
- Di Cosmo (2002), 190–192; Torday (1997), 75–76.
- Di Cosmo (2002), 192; Torday (1997), 75–76.
- Di Cosmo (2002), 192–193; Yü (1967), 9–10; Morton & Lewis (2005), 52.
- Di Cosmo (2002), 193; Morton & Lewis (2005), 52.
- Yü (1986), 397; Book of Han, vol. 94a.
- Di Cosmo (2002), 193–195.
- Zizhi Tongjian, vol. 12.
- Di Cosmo (2002), 195–196; Torday (1997), 77; Yü (1967), 10–11.
- Loewe (1986), 130.
- Loewe (1986), 130–131; Wang (1982), 2.
- Loewe (1986), 130–131.
- Loewe (1986), 135.
- Loewe (1986), 135; Hansen (2000), 115–116.
- Zizhi Tongjian, vol. 13.
- Loewe (1986), 135–136; Hinsch (2002), 21.
- Loewe (1986), 136.
- Loewe (1986), 152.
- Torday (1997), 78.
- Loewe (1986), 136; Zizhi Tongjian, vol. 13.
- Loewe (1986), 136; Torday (1997), 78; Morton & Lewis (2005), 51–52; Zizhi Tongjian, vol. 13.
- Loewe (1986), 136–137.
- Hansen (2000), 117–119.
- Loewe (1986), 137–138.
- Loewe (1986), 149–150.
- Loewe (1986), 137–138; Loewe (1994), 128–129.
- Loewe (1994), 128–129.
- Csikszentmihalyi (2006), 25–27.
- Hansen (2000), 124–126; Loewe (1994), 128–129.
- Loewe (1986), 139.
- Loewe (1986), 140–144.
- Loewe (1986), 141.
- Zizhi Tongjian, vol. 16.
- Loewe (1986), 141; Zizhi Tongjian, vol. 16.
- Loewe (1986), 141–142.
- Loewe (1986), 144.
- Ebrey (1999), 64.
- Torday (1997), 80–81.
- Torday (1997), 80–81; Yü (1986), 387–388; Di Cosmo (2002), 196–198.
- Di Cosmo (2002), 201–203.
- Torday (1997), 82–83; Yü (1986), 388–389.
- Di Cosmo (2002), 199–201 & 204–205; Torday (1997), 83–84.
- Yü (1986), 388–389.
- Yü (1986), 388–389; Di Cosmo (2002), 199–200.
- Kramers (1986), 752–753.
- Kramers (1986), 754–755.
- Kramers (1986), 753–754.
- Kramers (1986), 754.
- Kramers (1986), 754–756.
- Kramers (1986), 754–756; Morton & Lewis (2005), 53.
- Ebrey (1999), 77.
- Ebrey (1999), 77–78.
- Tom (1989), 99.
- Ebrey (1999), 80.
- Torday (1997), 91.
- Torday (1997), 83–84; Yü (1986), 389–390.
- Di Cosmo (2002), 211–214; Yü (1986) 389–390.
- Yü (1986), 389–390; Di Cosmo (2002), 214; Torday (1997), 91–92.
- Yü (1986), 390; Di Cosmo (2002), 237–239.
- Yü (1986), 390; Di Cosmo (2002), 240.
- Di Cosmo (2002), 232.
- Yü (1986), 391; Di Cosmo (2002), 241–242; Chang (2007), 5–6.
- Yü (1986), 391; Chang (2007), 8.
- Chang (2007), 23–33.
- Chang (2007), 53–56.
- Chang (2007), 6.
- Chang (2007), 173.
- Di Cosmo (2002), 241–244, 249–250.
- Morton & Lewis (2005), 56.
- An (2002), 83.
- Di Cosmo (2002), 247–249; Yü (1986), 407; Torday (1997), 104; Morton & Lewis (2005), 54–55.
- Torday (1997), 105–106.
- Torday (1997), 108–112.
- Torday (1997), 114–117.
- Ebrey (1999), 69.
- Torday (1997), 112–113.
- Ebrey (1999), 70.
- Di Cosmo (2002), 250–251.
- Yü (1986), 390–391.
- Chang (2007), 174; Yü (1986), 409–411.
- Yü (1986), 409–411.
- Torday (1997), 119–120.
- Yü (1986), 452.
- Yü (1986), 451–453.
- Ebrey (1999), 83.
- Yü (1986), 448.
- Yü (1986), 448–449.
- Pai (1992), 310–315.
- Hinsch (2002), 21–22; Wagner (2001), 1–2.
- Wagner (2001), 13–14.
- Wagner (2001), 13.
- Ebrey (1999), 75; Morton & Lewis (2005), 57.
- Wagner (2001), 13–17; Nishijima (1986), 576.
- Loewe (1986), 160–161.
- Loewe (1986), 160–161; Nishijima (1986), 581–582.
- Nishijima (1986), 586–588.
- Nishijima (1986), 588.
- Ebrey (1999), 66.
- Wang (1982), 100.
- Loewe (1986), 173–174.
- Loewe (1986), 175–177; Loewe (2000), 275.
- Zizhi Tongjian, vol. 22; Loewe (2000), 275; Loewe (1986), 178.
- Loewe (1986), 178.
- Huang (1988), 44; Loewe (1986), 180–182; Zizhi Tongjian, vol. 23.
- Huang (1988), 45.
- Huang (1988), 44; Loewe (1986), 183–184.
- Loewe (1986), 183–184.
- Loewe (1986), 184.
- Huang (1988), 46; Loewe (1986), 185.
- Huang (1988), 46.
- Loewe (1986), 185–187.
- Loewe (1986), 187–197; Chang (2007), 175–176.
- Loewe (1986), 187–197.
- Loewe (1986), 187–206.
- Wagner (2001), 16–19.
- Loewe (1986), 196.
- Loewe (1986), 201.
- Loewe (1986), 201–202.
- Loewe (1986), 208.
- Loewe (1986), 208; Csikszentmihalyi (2006), xxv–xxvi
- Loewe (1986), 196–198; Yü (1986), 392–394.
- Yü (1986), 409.
- Yü (1986), 410–411.
- Loewe (1986), 197.
- Yü (1986), 410–411; Loewe (1986), 198.
- Yü (1986), 394; Morton & Lewis (2005), 55.
- Yü (1986), 395.
- Yü (1986), 395–396; Loewe (1986), 196–197.
- Yü (1986), 396–397.
- Yü (1986), 396–398; Loewe (1986), 211–213; Zizhi Tongjian, vol. 29.
- Yü (1986), 396–398; Loewe (1986), 211–213.
- Yü (1986), 398.
- Bielenstein (1986), 225–226; Huang (1988), 46–48.
- Bielenstein (1986), 225–226; Loewe (1986), 213.
- Bielenstein (1986), 225–226.
- Bielenstein (1986), 227; Zizhi Tongjian, vol. 33; Zizhi Tongjian, vol. 34.
- Bielenstein (1986), 227–228.
- Bielenstein (1986), 228–229.
- Bielenstein (1986), 229–230.
- Bielenstein (1986), 230–231; Hinsch (2002), 23–24.
- Bielenstein (1986), 230–231; Hinsch (2002), 23–24; Ebrey (1999), 66.
- Hansen (2000), 134; Lewis (2007), 23.
- Hansen (2000), 134; Bielenstein (1986), 232; Lewis (2007), 23.
- Lewis (2007), 23; Bielenstein (1986), 234; Morton & Lewis (2005), 58.
- Bielenstein (1986), 232–233.
- Bielenstein (1986), 232–233; Morton & Lewis (2005), 57.
- Bielenstein (1986), 233.
- Bielenstein (1986), 234; Hinsch (2002), 24.
- Bielenstein (1986), 236.
- Bielenstein (1986), 237.
- Bielenstein (1986), 238.
- Bielenstein (1986), 238–239; Yü (1986), 450.
- Yü (1986), 450.
- Hansen (2000), 135; Bielenstein (1986), 241–242; de Crespigny (2007), 196.
- Hansen (2000), 135; Bielenstein (1986), 241–242.
- Hansen (2000), 135; de Crespigny (2007), 196; Bielenstein (1986), 243–244.
- de Crespigny (2007), 196; Bielenstein (1986), 243–244
- Bielenstein (1986), 246; de Crespigny (2007), 558; Zizhi Tongjian, vol. 38.
- de Crespigny (2007), 558–559; Bielenstein (1986), 247.
- de Crespigny (2007), 558–559.
- Bielenstein (1986), 248; de Crespigny (2007), 568.
- Bielenstein (1986), 248–249; de Crespigny (2007), 197.
- de Crespigny (2007), 197, 560, & 569; Bielenstein (1986), 249–250.
- de Crespigny (2007), 559–560.
- de Crespigny (2007), 560; Bielenstein (1986), 251.
- de Crespigny (2007), 197–198 & 560; Bielenstein (1986), 251–254.
- de Crespigny (2007), 560–561; Bielenstein (1986), 254.
- Bielenstein (1986), 254; de Crespigny (2007), 561.
- Bielenstein (1986), 254; de Crespigny (2007), 269 & 561.
- Bielenstein (1986), 255.
- de Crespigny (2007), 54–55.
- Bielenstein (1986), 255; de Crespigny (2007), 270.
- Hinsch (2002), 24–25; Cullen (2006), 1.
- Wang (1982), 29–30; Bielenstein (1986), 262.
- Wang (1982), 30–33.
- Hansen (2000), 135–136.
- Ebrey (1999), 73.
- Nishijima (1986), 595–596.
- Ebrey (1999), 82.
- Wang (1982), 55–56.
- Ebrey (1986), 609.
- de Crespigny (2007), 564–565.
- Ebrey (1986), 613.
- Bielenstein (1986), 256.
- de Crespigny (2007), 605.
- de Crespigny (2007), 606.
- Bielenstein (1986), 243.
- de Crespigny (2007), 608–609.
- de Crespigny (2007), 496.
- de Crespigny (2007), 498.
- de Crespigny (2007), 498; Deng (2005), 67.
- de Crespigny (2007), 591.
- de Crespigny (2007), 591; Hansen (2000), 137–138.
- Hansen (2000), 137–138.
- de Crespigny (2007), 592.
- de Crespigny (2007), 562 & 660; Yü (1986), 454.
- Yü (1986), 399–400.
- Yü (1986), 401.
- Yü (1986), 403.
- Torday (1997), 390–391.
- Yü (1986), 413–414.
- Yü (1986), 404.
- Yü (1986), 414–415.
- de Crespigny (2007), 73.
- Yü (1986), 415 & 420.
- Yü (1986), 415; de Crespigny (2007), 171.
- Yü (1986), 415.
- de Crespigny (2007), 5.
- de Crespigny (2007), 6; Torday (1997), 393.
- Yü (1986), 415–416.
- de Crespigny (2007), 497 & 590.
- Yü (1986), 460–461; de Crespigny (2007), 239–240.
- Wood (2002), 46–47; Morton & Lewis (2005), 59.
- Yü (1986), 450–451.
- Demiéville (1986), 821–822.
- Demiéville (1986), 823.
- Demiéville (1986), 823; Akira (1998), 247–248.
- Bielenstein (1986), 278; Zizhi Tongjian, vol. 40; Zizhi Tongjian, vol. 43.
- Bielenstein (1986), 257–258; de Crespigny (2007), 607–608.
- de Crespigny (2007), 499.
- Hansen (2000), 136.
- de Crespigny (2007), 499 & 588–589.
- Bielenstein (1986), 280–281.
- de Crespigny (2007), 589; Bielenstein (1986), 282–283.
- de Crespigny (2007), 531; Bielenstein (1986), 283.
- Bielenstein (1986), 283; de Crespigny (2007), 122–123; Zizhi Tongjian, vol. 49.
- de Crespigny (2007), 122–123; Bielenstein (1986), 283–284.
- Bielenstein (1986), 284; de Crespigny (2007), 128 & 580.
- Bielenstein (1986), 284–285; de Crespigny (2007), 582–583.
- Bielenstein (1986), 284–285; de Crespigny (2007), 473–474.
- Bielenstein (1986), 285; de Crespigny (2007), 477–478, 595–596.
- Bielenstein (1986) 285; de Crespigny (2007), 477–478, 595–596; Zizhi Tongjian, vol. 53.
- Bielenstein (1986), 285–286; de Crespigny (2007), 597–598.
- de Crespigny (2007), 510; Beck (1986), 317–318.
- Loewe (1994), 38–52.
- de Crespigny (2007), 126.
- de Crespigny (2007), 126–127.
- de Crespigny (2007), 581–582.
- de Crespigny (2007), 475.
- de Crespigny (2007), 474–475 & 1049–1051; Minford & Lau (2002), 307; Needham (1965), 30, 484, 632, 627–630.
- de Crespigny (2007), 477.
- de Crespigny (2007), 475; Bielenstein (1986), 287–288.
- de Crespigny (2007), 596–597.
- de Crespigny (2007), 596.
- de Crespigny (2007), 597.
- Hansen (2000), 141.
- de Crespigny (2007), 597, 601–602.
- de Crespigny (2007), 599.
- de Crespigny (2007), 601–602; Hansen (2000), 141–142.
- de Crespigny (2007), 513–514.
- Yü (1986), 421; Chang (2007), 22.
- Yü (1986), 421.
- de Crespigny (2007), 123.
- Yü (1986), 422 & 425–426.
- Zizhi Tongjian, vol. 49; Book of Later Han, vol. 47.
- Yü (1986), 425–426.
- de Crespigny (2007), 123–124; Zizhi Tongjian, vol. 49; Book of Later Han, vol. 47, vol. 87; see also Yü (1986), 429–430.
- de Crespigny (2007), 123–124.
- de Crespigny (2007), 123–124; Yü (1986), 430–432.
- Yü (1986), 432.
- Yü (1986), 433–435.
- Yü (1986), 416–417 & 420.
- Yü (1986), 405 & 443–444.
- Yü (1986), 443–444.
- Yü (1986), 444–445.
- Yü (1986), 445–446.
- Demiéville (1986), 823; Akira (1998), 248; Zhang (2002), 75.
- Akira (1998), 248 & 251.
- Demiéville (1986), 825–826.
- de Crespigny (2007), 600; Yü (1986), 460–461.
- de Crespigny (2007), 600.
- de Crespigny (2007), 513; Barbieri-Low (2007), 207; Huang (1988), 57.
- de Crespigny (2007), 602.
- Beck (1986), 319–320.
- Beck (1986), 320–321.
- Beck (1986), 321–322.
- Beck (1986), 322.
- Beck (1986), 322; Zizhi Tongjian, vol. 56.
- de Crespigny (2007), 511.
- Beck (1986), 323; Hinsch (2002), 25–26.
- Hansen (2000), 144–145.
- Hendrischke (2000), 140–141.
- Hansen (2000), 145–146.
- Hansen (2000), 145–146; de Crespigny (2007), 514–515; Beck (1986), 339–340.
- de Crespigny (2007), 515.
- Ebrey (1999), 84.
- Beck (1986), 339; Huang (1988), 59–60.
- Beck (1986), 341–342.
- Beck (1986), 343.
- Beck (1986), 344.
- Beck (1986), 344; Zizhi Tongjian, vol. 59.
- Beck (1986), 345.
- Beck (1986), 345; Hansen (2000), 147; Morton & Lewis (2005), 62.
- Beck (1986), 345–346.
- Beck (1986), 346.
- Beck (1986), 346–347.
- Beck (1986), 347.
- Beck (1986), 347–349.
- de Crespigny (2007), 158.
- Zizhi Tongjian, vol. 60.
- Beck (1986), 349.
- Beck (1986), 350–351.
- de Crespigny (2007), 35–36.
- de Crespigny (2007), 36.
- Beck (1986), 351.
- Zizhi Tongjian, vol. 63.
- de Crespigny (2007), 37.
- de Crespigny (2007), 37; Beck (1986), 352.
- Beck (1986), 352.
- Beck (1986), 353–354.
- Beck (1986), 352–353.
- Beck (1986), 354–355.
- Beck (1986), 355–366.
- Beck (1986), 356–357; Hinsch (2002), 26.
- Akira, Hirakawa. (1998). A History of Indian Buddhism: From Sakyamuni to Early Mahayana. Translated by Paul Groner. New Delhi: Jainendra Prakash Jain At Shri Jainendra Press. ISBN 81-208-0955-6.
- An, Jiayao. (2002). "When Glass Was Treasured in China," in Silk Road Studies VII: Nomads, Traders, and Holy Men Along China's Silk Road, 79–94. Edited by Annette L. Juliano and Judith A. Lerner. Turnhout: Brepols Publishers. ISBN 2-503-52178-9.
- Beck, Mansvelt. (1986). "The Fall of Han," in The Cambridge History of China: Volume I: the Ch'in and Han Empires, 221 B.C. – A.D. 220, 317-376. Edited by Denis Twitchett and Michael Loewe. Cambridge: Cambridge University Press. ISBN 0-521-24327-0.
- Barbieri-Low, Anthony J. (2007). Artisans in Early Imperial China. Seattle & London: University of Washington Press. ISBN 0-295-98713-8.
- Bielenstein, Hans. (1986). "Wang Mang, the Restoration of the Han Dynasty, and Later Han," in The Cambridge History of China: Volume I: the Ch'in and Han Empires, 221 B.C. – A.D. 220, 223–290. Edited by Denis Twitchett and Michael Loewe. Cambridge: Cambridge University Press. ISBN 0-521-24327-0.
- Chang, Chun-shu. (2007). The Rise of the Chinese Empire: Volume II; Frontier, Immigration, & Empire in Han China, 130 B.C. – A.D. 157. Ann Arbor: University of Michigan Press. ISBN 0-472-11534-0.
- Csikszentmihalyi, Mark. (2006). Readings in Han Chinese Thought. Indianapolis and Cambridge: Hackett Publishing Company, Inc. ISBN 0-87220-710-2.
- Cullen, Christopher. (2006). Astronomy and Mathematics in Ancient China: The Zhou Bi Suan Jing. Cambridge: Cambridge University Press. ISBN 0-521-03537-6.
- Davis, Paul K. (2001). 100 Decisive Battles: From Ancient Times to the Present. New York: Oxford University Press. ISBN 0-19-514366-3.
- de Crespigny, Rafe. (2007). A Biographical Dictionary of Later Han to the Three Kingdoms (23-220 AD). Leiden: Koninklijke Brill. ISBN 90-04-15605-4.
- Demiéville, Paul. (1986). "Philosophy and religion from Han to Sui," in Cambridge History of China: Volume I: the Ch'in and Han Empires, 221 B.C. – A.D. 220, 808–872. Edited by Denis Twitchett and Michael Loewe. Cambridge: Cambridge University Press. ISBN 0-521-24327-0.
- Deng, Yingke. (2005). Ancient Chinese Inventions. Translated by Wang Pingxing. Beijing: China Intercontinental Press (五洲传播出版社). ISBN 7-5085-0837-8.
- Di Cosmo, Nicola. (2002). Ancient China and Its Enemies: The Rise of Nomadic Power in East Asian History. Cambridge: Cambridge University Press. ISBN 0-521-77064-5.
- Ebrey, Patricia. (1986). "The Economic and Social History of Later Han," in Cambridge History of China: Volume I: the Ch'in and Han Empires, 221 B.C. – A.D. 220, 608-648. Edited by Denis Twitchett and Michael Loewe. Cambridge: Cambridge University Press. ISBN 0-521-24327-0.
- Ebrey, Patricia (1999). The Cambridge Illustrated History of China. Cambridge: Cambridge University Press. ISBN 0-521-66991-X.
- Hansen, Valerie. (2000). The Open Empire: A History of China to 1600. New York & London: W.W. Norton & Company. ISBN 0-393-97374-3.
- Hendrischke, Barbara. (2000). "Early Daoist Movements" in Daoism Handbook, ed. Livia Kohn, 134-164. Leiden: Brill. ISBN 90-04-11208-1
- Hinsch, Bret. (2002). Women in Imperial China. Lanham: Rowman & Littlefield Publishers, Inc. ISBN 0-7425-1872-8.
- Huang, Ray. (1988). China: A Macro History. Armonk & London: M.E. Sharpe Inc., an East Gate Book. ISBN 0-87332-452-8.
- Hulsewé, A.F.P. (1986). "Ch'in and Han law," in The Cambridge History of China: Volume I: the Ch'in and Han Empires, 221 B.C. – A.D. 220, 520-544. Edited by Denis Twitchett and Michael Loewe. Cambridge: Cambridge University Press. ISBN 0-521-24327-0.
- Kramers, Robert P. (1986). "The Development of the Confucian Schools," in Cambridge History of China: Volume I: the Ch'in and Han Empires, 221 B.C. – A.D. 220, 747–756. Edited by Denis Twitchett and Michael Loewe. Cambridge: Cambridge University Press. ISBN 0-521-24327-0.
- Lewis, Mark Edward. (2007). The Early Chinese Empires: Qin and Han. Cambridge: Harvard University Press. ISBN 0-674-02477-X.
- Loewe, Michael. (1986). "The Former Han Dynasty," in The Cambridge History of China: Volume I: the Ch'in and Han Empires, 221 B.C. – A.D. 220, 103–222. Edited by Denis Twitchett and Michael Loewe. Cambridge: Cambridge University Press. ISBN 0-521-24327-0.
- Loewe, Michael. (1994). Divination, Mythology and Monarchy in Han China. Cambridge, New York, and Melbourne: Cambridge University Press. ISBN 0-521-45466-2.
- Loewe, Michael. (2000). A Biographical Dictionary of the Qin, Former Han, and Xin Periods (221 BC - AD 24). Leiden, Boston, Koln: Koninklijke Brill NV. ISBN 90-04-10364-3.
- Minford, John and Joseph S.M. Lau. (2002). Classical Chinese literature: an anthology of translations. New York: Columbia University Press. ISBN 0-231-09676-3.
- Morton, William Scott and Charlton M. Lewis. (2005). China: Its History and Culture: Fourth Edition. New York City: McGraw-Hill. ISBN 0-07-141279-4.
- Needham, Joseph (1965). Science and Civilization in China: Volume 4, Physics and Physical Technology, Part II: Mechanical Engineering. Cambridge: Cambridge University Press. Reprint from Taipei: Caves Books, 1986. ISBN 0-521-05803-1.
- Nishijima, Sadao. (1986). "The Economic and Social History of Former Han," in Cambridge History of China: Volume I: the Ch'in and Han Empires, 221 B.C. – A.D. 220, 545-607. Edited by Denis Twitchett and Michael Loewe. Cambridge: Cambridge University Press. ISBN 0-521-24327-0.
- Pai, Hyung Il. "Culture Contact and Culture Change: The Korean Peninsula and Its Relations with the Han Dynasty Commandery of Lelang," World Archaeology, Vol. 23, No. 3, Archaeology of Empires (Feb., 1992): 306-319.
- Shi, Rongzhuan. "The Unearthed Burial Jade in the Tombs of Han Dynasty's King and Marquis and the Study of Jade Burial System", Cultural Relics of Central China, No. 5 (2003): 62–72. ISSN 1003-1731.
- Tom, K.S. (1989). Echoes from Old China: Life, Legends, and Lore of the Middle Kingdom. Honolulu: The Hawaii Chinese History Center of the University of Hawaii Press. ISBN 0-8248-1285-9.
- Torday, Laszlo. (1997). Mounted Archers: The Beginnings of Central Asian History. Durham: The Durham Academic Press. ISBN 1-900838-03-6.
- Wagner, Donald B. (2001). The State and the Iron Industry in Han China. Copenhagen: Nordic Institute of Asian Studies Publishing. ISBN 87-87062-83-6.
- Wang, Zhongshu. (1982). Han Civilization. Translated by K.C. Chang and Collaborators. New Haven and London: Yale University Press. ISBN 0-300-02723-0.
- Wilkinson, Endymion Porter. (1998). Chinese History: A Manual. Cambridge and London: Harvard University Asia Center of the Harvard University Press. ISBN 0-674-12337-8.
- Wood, Frances. (2002). The Silk Road: Two Thousand Years in the Heart of Asia. Berkeley and Los Angeles: University of California Press. ISBN 0-520-24340-4.
- Yü, Ying-shih. (1967). Trade and Expansion in Han China: A Study in the Structure of Sino-Barbarian Economic Relations. Berkeley: University of California Press.
- Yü, Ying-shih. (1986). "Han Foreign Relations," in The Cambridge History of China: Volume I: the Ch'in and Han Empires, 221 B.C. – A.D. 220, 377-462. Edited by Denis Twitchett and Michael Loewe. Cambridge: Cambridge University Press. ISBN 0-521-24327-0.
- Zhang, Guangda. (2002). "The Role of the Sogdians as Translators of Buddhist Texts," in Silk Road Studies VII: Nomads, Traders, and Holy Men Along China's Silk Road, 75–78. Edited by Annette L. Juliano and Judith A. Lerner. Turnhout: Brepols Publishers. ISBN 2-503-52178-9.
Further reading
- Dubs, Homer H. (trans.) The History of the Former Han Dynasty. 3 vols. Baltimore: Waverly Press, 1938-
- Hill, John E. (2009) Through the Jade Gate to Rome: A Study of the Silk Routes during the Later Han Dynasty, 1st to 2nd Centuries CE. John E. Hill. BookSurge, Charleston, South Carolina. ISBN 978-1-4392-2134-1.
| http://www.digplanet.com/wiki/History_of_the_Han_Dynasty | 13
70 | I’ve been reading a lot about inflation lately and wanted to write a quick post about what inflation really is. How the government reports inflation is grossly inaccurate, so I attempt to provide a clear example of how inflation is a cause of higher prices and not the other way around.
In order to understand the effect that inflation has on prices, we need to understand three relationships:
- The relationship between an item and its value
- The relationship between supply and demand and prices
- The relationship between dollars and prices
Once these three relationships are understood, the overall effect of inflation on prices will be clear.
What’s the Real Value of Something?
This is a question that is almost always left out of a discussion of inflation and prices, but it is essential to understand that there is a difference between value and price.
If there were a world without money, the value of an orange would be expressed in terms of how great it tastes, or the nutritional benefits of the vitamins it provides. In the same vein, the value of a gallon of gas would be measured in the number of miles that can be driven in your car when that gallon is poured into the tank. If that gallon of gas allows you to get to your job for a couple days, then the value would be quite high.
While the value of an orange or a gallon of gas could be different for every individual person, it is important to see that value is not based on, or even related to, money. Even if there were no money, things would still have value.
Supply and Demand and Prices
Before we get to the effect of inflation on prices, we need to understand supply and demand.
From our example above, if there is a frost and the orange crop is damaged, the price of oranges will go up. It's important to note that this rise in price is not inflation. It is a common misunderstanding that inflation is a result of rising prices, but this is false. Inflation can cause prices to rise, but so can many other factors, most notably supply and demand.
The way governments measure and report inflation is actually flawed for this reason. If the Consumer Price Index (CPI) goes up, as a result of the price of oranges or other products going up, then the government reports inflation, but it would be more accurate to say prices are increasing, and that may or may not be caused by inflation.
Dollars and Prices
Finally we can look at the relationship between the cost of an item and the number of dollars available in the economy. In order to create a manageable example, we'll take our orange and imagine eating it on an island country where the sum total of all money available is only 100 dollars.
If we go back to the "value" of an orange in this economy, we could express that value (which is based on the benefit derived from eating the orange) as 1/100th of the total value of all the goods available on the island. Obviously this is just an arbitrary figure, but in reality, since our money is finite, the principle is sound: everything has a price which reflects its value and which can be expressed as a fraction of all the money available to spend on any possible item for sale.
Now let’s double the total amount of money available on the island, due to the island government creating a stimulus package.
Now there is a total of 200 dollars available on the island, but the value of an orange has not changed. It still has the same great taste and health benefits. Therefore it still has the same value when expressed as a fraction of all available money: 1/100th.
But now there are 200 dollars so 1/100th of 200 dollars is $2. The price of an orange, due to the government stimulus package which doubled the amount of money available, has also doubled from $1 to $2.
This is the essence of inflation as a result of money supply. This should help us understand what MUST result from the government’s recent bailout package.
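To make the island arithmetic concrete, here is a minimal sketch in Python (the 1/100th share and the 100- and 200-dollar money supplies are simply the numbers from the example above) that treats an item's price as its fixed share of value multiplied by the total money in circulation:

    # Illustrative only: price as a fixed share of the total money supply.
    def price(value_share, money_supply):
        # The item's "real" value is assumed to stay a constant fraction of all money.
        return value_share * money_supply

    orange_share = 1 / 100                  # the orange's share of total value on the island
    print(price(orange_share, 100))         # 1.0 -> $1 with 100 dollars in circulation
    print(price(orange_share, 200))         # 2.0 -> $2 after the money supply doubles

Doubling the money supply doubles the price even though nothing about the orange itself has changed, which is exactly the point of the example.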
Money. We work for it. We buy things with it. We need it for retirement. But what is it, anyway? And what gives our money value?
When you take a second to think about it, it’s amazing that people don’t ask these questions of themselves more often. After all, the saying “money makes the world go ’round” is true — but why? Why do we work forty hours a week (or more) for these pieces of paper? And why are merchants willing to trade us real goods for them?
Gold and Silver
There was a time when a “dollar” was simply a term for a set weight in gold. Through the start of World War I, you could take your dollars to the U.S. government and convert them into gold at a rate of $20.67 per ounce. Redemptions were temporarily suspended in 1914, but later resumed. Then in 1934, the value of the dollar was changed so that one ounce of gold was worth $35. Although citizens could no longer redeem their dollars for gold, foreign governments could, all the way up until 1971.
The U.S. dollar used to also be convertible into silver. As late as 1968, "silver certificate" dollar bills could still be converted into silver by the government. The last silver certificates were issued in 1957.
But since 1971, the U.S. dollar has been convertible into absolutely nothing. Why then do people still work for them? The answer is legal tender laws. If you look at one of your dollars, you will notice it says, "This note is legal tender for all debts, public and private." This means that people and businesses have to accept U.S. dollars, by law, for any debts. And of course, the U.S. government has to accept them for taxes.
But this isn’t good enough for a lot of people. They think that the “closing of the gold window” in 1971 put a death sentence on the U.S. dollar. Without gold or silver backing, there is little to stop the government from printing more and more paper money, and if adequate goods and services are not produced to equal the expanded money supply, then there is inflation. How much inflation have we had since going off the gold standard? Well, according to the Bureau of Labor Statistics Inflation Calculator, it would take $514.45 in 2007 to equal the purchasing power of $100 in 1971.
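As a rough sanity check on that figure, the short sketch below (Python; the $514.45 figure and the 1971–2007 span are taken from the paragraph above) backs out the average annual inflation rate they imply:

    # Average annual inflation implied by $100 (1971) growing to $514.45 (2007).
    start, end = 100.0, 514.45
    years = 2007 - 1971                         # 36 years
    annual_rate = (end / start) ** (1 / years) - 1
    print(f"{annual_rate:.2%}")                 # about 4.7% per year, compounded

Compounding at under 5% a year is enough to strip away roughly four fifths of the dollar's purchasing power over that span.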
What Does This Mean to Investors and Consumers?
There are plenty of financial analysts slightly outside of the mainstream who have been preaching the coming “Financial Armageddon” for decades. So far, they’ve been wrong, but perhaps they will be right in the end. Regardless, it is probably a smart idea to diversify out of U.S. dollars so that you’re not vulnerable to inflation or a potential collapse of the dollar.
One way to do so is to convert your U.S. dollars into gold. No, the government no longer performs conversions for you, but you can buy gold in the open market. In fact, with gold-based exchange-traded funds (ETFs), it’s never been easier.
But gold isn’t the only investment that helps diversify out of U.S. dollars. You can convert your dollars to foreign currencies, invest in stocks (which have their own inflation-protection measures) — especially foreign stocks, or buy real estate. The collapse of the U.S. dollar is probably not something that should keep you up at night, but converting your dollars into real assets is probably a wise move, regardless. After all, your dollars themselves are worthless — it’s only what you can trade them for that gives them value.
I've been reading through my new copy of The Single Best Investment and right there on page one the author, Lowell Miller, slapped me in the face with a very important reminder.
An often overlooked, or dare I say purposely neglected, topic among most conservative personal finance writers and investment advisers is inflation.
Let’s take a look at some facts about inflation from the book.
The average annual inflation rate for the past 60 years is: 4.10%
Since 1945, there have only been 2 years when inflation has been negative.
What this means for your portfolio, and probably why most investment sellers don't talk much about inflation, is that you start off, on average, 4.10% in the hole each year! That's before you even plunk your dough into the latest under-performing mutual fund, and don't forget your 2.50% MER.
How's that 7% annual return that the fund company is paying, splashed all over the sports page of your newspaper, looking now?
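To put rough numbers on that point, here is a small sketch (Python; the 7% return, 2.50% MER and 4.10% average inflation are the figures quoted above, and deducting the fee before adjusting for inflation is my own assumption) showing what is left in real terms:

    # What remains of an advertised 7% return after fees and average inflation.
    gross_return = 0.07      # advertised annual fund return
    mer = 0.025              # management expense ratio
    inflation = 0.041        # long-run average inflation quoted in the book

    net_nominal = gross_return - mer                       # about 4.5% after fees
    real_return = (1 + net_nominal) / (1 + inflation) - 1
    print(f"{real_return:.2%}")                            # roughly 0.4% in real terms

On these assumptions, almost the entire advertised return is eaten by fees and inflation.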
Here’s what inflation looks like in real life…you’ll see what compounding looks like in actual terms.
A middle-of-the-line Ford car, in 1980, cost $3,500. Today, approximately 26 years later, the same vehicle would cost you $20,000. This represents a period of higher than average inflation, but even on average, at only 4% inflation, prices will double every 18 years. That is without any other influence.
Since 1945 the Consumer Price Index reports that prices have risen over 900%.
Inflation and Your Portfolio, In Real Terms
What this really means is that if you invested $3,500 in 1980 and that investment is worth $20,000 now, pat yourself on the back…you broke even!
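The doubling claim and the break-even point above are easy to verify with a few lines of Python (the $3,500, $20,000, 26-year and 4% figures come straight from the text):

    # Check the doubling-every-18-years claim and the Ford car example.
    print(1.04 ** 18)                       # about 2.03: prices roughly double in 18 years at 4%
    implied = (20000 / 3500) ** (1 / 26) - 1
    print(f"{implied:.2%}")                 # about 6.9% average annual inflation over those 26 years
    # An investment that merely grew from $3,500 to $20,000 over the same period
    # only kept pace with prices, so its real return was essentially zero.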
What I liked about the author putting this into his investing book is that it encourages the average retail investor, his audience, to be more honest about the context of their investments.
It is easy, just as we investors tend to talk up the winners and quietly neglect our losses, to ignore the silent force of inflation when calculating our returns, or, even more importantly, when setting goals such as the funds needed for children's college accounts and our retirements.
The next time you're evaluating a fixed income investment, don't forget to take inflation into the equation. If you do, you'll see that these types of holdings, which are often sold as "low risk," are actually very risky: evaluated against the ever-present monetary force of inflation, they will almost certainly lose you money. | http://www.smartmoneydaily.com/tag/inflation | 13
22 | Human Rights Promotion & Protection: Definitions & Conceptual Issues
The first section addresses the definition of human rights as well as of the different categories and generations of rights concerned. It also provides references to the main international and regional instruments referring to the different categories of human rights, as well as definitions for different terms and expressions used in reference to that topic. It also presents a brief history of human rights.
Human rights, following the manifest literal sense of the term, are ordinarily understood to be the rights that one has simply because one is human. As such, they are equal rights, because we either are or are not human beings, equally. Human rights are also inalienable rights, because being or not being human usually is seen as an inalterable fact of nature, not something that is either earned or can be lost. Human rights are thus "universal" rights in the sense that they are held "universally" by all human beings.
Source: Donnelly, Jack. The Relative Universality of Human Rights. Human Rights Quarterly 29 (2007): 282-283
Human rights have a series of key characteristics that distinguish them from other rights:1
A Conventional Definition of Human Rights
Human rights are commonly understood as being those rights which are inherent to the human being. The concept of human rights acknowledges that every single human being is entitled to enjoy his or her human rights without distinction as to race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status.
The following are some of the most important characteristics of human rights:
"Fundamental," "basic," "elementary," "essential," and "core" rights
The notion of "fundamental rights" usually refers to such rights as the right to life and the inviolability of the person. The need to distinguish these rights stems from the concern that a broad definition of human rights may lead to the notion of "violation of human rights" losing some of its significance. The terms "elementary," "essential," "core" and "fundamental" human rights are also used.
Another approach is to distinguish a number of "basic rights," which should be given absolute priority in national and international policy. These include all the rights which concern people's primary material and non-material needs. If these are not provided, no human being can lead a dignified existence. Basic rights include the right to life, the right to a minimum level of security, the inviolability of the person, freedom from slavery and servitude, and freedom from torture, unlawful deprivation of liberty, discrimination and other acts which impinge on human dignity. They also include freedom of thought, conscience and religion, as well as the right to suitable nutrition, clothing, shelter and medical care, and other essentials crucial to physical and mental health.
"Classic" vs. "social" rights
Originally, the differentiation of "classic" rights from "social" rights was meant to reflect the nature of the obligations under each set of rights. "Classic rights" were seen as requiring "the non-intervention of the state (negative obligation), and 'social rights' as requiring active intervention on the part of the state (positive obligations). In other words, classic rights entail an obligation for the state to refrain from certain actions, while social rights oblige it to provide certain guarantees."6 Two key principles are the progressive realization of economic, social, and cultural rights, and the use by the state of the maximum of its available resources. The Committee on Economic and Social Rights (CESR) has also articulated the absolute minimum that the state, however poor, must provide.7 The Cold War ideological context contributed to polarizing the debate, as "classic" rights (civil and political rights) were perceived as championed by the West and "social" rights by the East. The evolution of international relations and international law, however, has made the instrumentalization of this distinction anachronistic, even though a similar debate has arisen with respect to cultural relativism of human rights.8
Go to debate Human rights: universalism versus cultural relativism
"Individual" vs. "collective" rights
This distinction has captured a lot of attention in some international debates over the second half of the twentieth century. It originally refers to the fact that some of the individual rights are exercised by people in groups. Freedom of association and assembly, freedom of religion and, more especially, the freedom to form or join a trade union, fall into this category. The collective element is even more evident when human rights are linked specifically to membership in a certain group, such as the right of members of ethnic and cultural minorities to preserve their own language and culture. "One must make a distinction between two types of rights, which are usually called collective rights: individual rights enjoyed in association with others, and the rights of a collective. The most notable example of a collective human right is the right to self-determination, which is regarded as being vested in peoples rather than in individuals.9 The recognition of the right to self-determination as a human right is grounded in the fact that it is seen as a necessary precondition for the development of the individual."10 It is generally accepted that collective rights may not infringe upon universally accepted individual rights, such as the right to life and freedom from torture. But this distinction has been at the core of debates surrounding the universality of human rights.
Go to debate Human rights: universalism versus cultural relativism
Key references on civil and political rights
Universal Declaration of Human Rights
General Assembly Resolution 217 A (III) of December 1948
The International Covenant on Civil and Political Rights (ICCPR)
The ICCPR was adopted in 1966. It addresses the State's traditional responsibilities for administering justice and maintaining the rule of law. Many of the provisions in the Covenant address the relationship between the individual and the State. In discharging these responsibilities, States must ensure that human rights are respected, not only those of the victim but also those of the accused.
The Covenant has two Optional Protocols. The first Optional Protocol to the ICCPR establishes the procedure for dealing with communications (or complaints) from individuals claiming to be victims of violations of any of the rights set out in the Covenant. The Second Optional Protocol to the ICCPR envisages the abolition of the death penalty.
Unlike the Universal Declaration and the Covenant on Economic, Social and Cultural Rights, the Covenant on Civil and Political Rights authorizes a State to derogate from--in other words, restrict--the enjoyment of certain rights in times of an official public emergency which threatens the life of a nation. Such limitations are permitted only to the extent strictly required under the circumstances and must be reported to the United Nations. Even so, some provisions such as the right to life and freedom from torture and slavery may never be suspended.
The Human Rights Committee is the treaty body composed of independent experts that monitors implementation of and compliance with the ICCPR by States parties.
First, second and third "generation" of human rights
Scholars and practitioners alike commonly refer to the division of human rights into three generations. This categorization was first proposed by Karel Vasak at the International Institute of Human Rights in Strasbourg. His division follows the principles of "Liberté, Égalité and Fraternité" of the French Revolution:11
First generation: Civil and political rights13
The term "civil right" is often used with reference to the rights set out in the first eighteen articles of the Universal Declaration of Human Rights (UDHR), almost all of which are also set out as binding treaty norms in the International Covenant on Civil and Political Rights (ICCPR). From this group, a further set of "physical integrity rights" has been identified, which concern the right to life, liberty and security of the person, and which offer protection from physical violence against the person, torture and other cruel and degrading treatment or punishment, arbitrary arrest, detention, exile, slavery and servitude, right to fair and prompt trial, interference with one's privacy and right of ownership, restriction of one's freedom of movement, and the freedom of thought, conscience and religion. There are also other provisions which protect members of ethnic, religious or linguistic minorities. Under Article 2, all States Parties undertake to respect and take the necessary steps to ensure the rights recognized in the Covenant without distinction of any kind, such as race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status.
In general, "political rights" are those set out in Articles 19 to 21 of the UDHR and also codified in the ICCPR (Articles 18, 19, 21, 22 and 25). They include freedom of expression, freedom of association and assembly, the right to take part in the government of one's country, and the right to vote and stand for election at genuine periodic elections held by secret ballot.
Key references on economic, social and cultural rights
Universal Declaration of Human Rights
General Assembly Resolution 217 A (III) of December 1948
The International Covenant on Economic, Social and Cultural Rights (ICESCR)
After 20 years of drafting debates, the ICESCR was adopted by the General Assembly in 1966 and entered into force in January 1976.
The Covenant embodies some of the most significant international legal provisions establishing economic, social and cultural rights, including, inter alia, rights relating to work in just and favorable conditions; to social protection; to an adequate standard of living including clothing, food and housing; to the highest attainable standards of physical and mental health; to education and to the enjoyment of the benefits of cultural freedom and scientific progress.
The Covenant also articulates the principle of progressive realization of economic, social, and cultural rights. Significantly, article 2 outlines the legal obligations which are incumbent upon States parties under the Covenant. States are required to take positive steps to implement these rights, to the maximum of their resources, in order to achieve the progressive realization of the rights recognized in the Covenant, particularly through the adoption of domestic legislation. The Committee on Economic and Social Rights (CESR) has also articulated the absolute minimum that the state, however poor, must provide.
The Committee on Economic, Social and Cultural Rights (CESR) is the treaty body composed of independent experts that monitors implementation of and compliance with the ICESCR by States parties.
Second generation: Economic, social and cultural rights14
In many respects, greater international attention has been given to the promotion and protection of civil and political rights than to the promotion and protection of social, economic and cultural rights, leading to the erroneous belief that violations of economic, social and cultural rights were not subject to the same degree of legal scrutiny and measures of redress. This view neglected the underlying principles of human rights: that rights are indivisible and interdependent, and therefore the violation of one right may lead to the violation of another. Economic, social and cultural rights are fully recognized by the international community and in international law, and are progressively gaining attention. These rights are designed to ensure the protection of people, based on the expectation that people can enjoy rights, freedoms and social justice simultaneously.
The economic and social rights are listed in Articles 22 to 26 of the UDHR, and further developed and set out as binding treaty norms in the International Covenant on Economic, Social and Cultural Rights (ICESCR). These rights provide the conditions necessary for a life of basic dignity, as well as prosperity and well-being. Economic rights refer, for example, to the right to work, which one freely chooses or accepts, the right to a fair wage, a reasonable limitation of working hours, and trade union rights. Social rights are those rights necessary for an adequate standard of living, including rights to health, shelter, food, social care, and the right to education (Articles 6 to 14 of the ICESCR). Cultural rights are listed in Articles 27 and 28 of UDHR: the right to participate freely in the cultural life of the community, to share in scientific advancement, and the right to the protection of the moral and material interests resulting from any scientific, literary or artistic production of which one is the author (see also Article 15 of the ICESCR and Article 27 of the ICCPR).
Third generation: group and collective rights
Apart from the right to self-determination, the main third generation right that so far has been given an official human rights status is the right to development.15 The Vienna Declaration confirms the right to development as a collective as well as an individual right, with the individual being regarded as the primary subject of development. Recently, the right to development has been given considerable attention in the activities of the High Commissioner for Human Rights. The EU and its member states also explicitly accept the right to development as part of the human rights concept. Many linguistic and cultural rights (such as the right to use and be instructed in a minority language, or the right to cultural autonomy, for instance) guaranteed to the individual can also be considered as intrinsic community rights. 16
Other key international documents for the protection of human rights
- Convention against Torture and Other Cruel, Inhuman or Degrading Treatment or Punishment:
Monitoring body: Committee Against Torture (CAT)
- International Convention on the Elimination of All Forms of Racial Discrimination (ICERD):
Monitoring body: Committee on the Elimination of Racial Discrimination (CERD)
Convention on the Elimination of All Forms of Discrimination against Women (CEDAW):
Monitoring body: Committee on the Elimination of Discrimination against Women (CEDW)
Convention on the Rights of the Child (CRC):
Optional Protocol to the Convention on the Rights of the Child on the involvement of children in armed conflict
Monitoring body: Committee on the Rights of the Child (CRC)
Customary international law (custom)
Customary international law (or simply "custom") is the term used to describe a general and consistent practice followed by States deriving from a sense of legal obligation. Thus, for example, while the Universal Declaration of Human Rights is not in itself a binding treaty, some of its provisions have the character of customary international law.17 As such, custom is an important source of international law along with treaties, general principles, judicial decisions and teachings.18
Declarations, recommendations etc. adopted by UN organs
General norms of international law, principles and practices that most States would agree to, are often stated in declarations, proclamations, standard rules, guidelines, recommendations and principles. While they have no binding legal effect on States, they nevertheless represent a broad consensus on the part of the international community and, therefore, have a strong and undeniable moral force on the practice of States in their conduct of international relations. The value of such instruments rests on their recognition and acceptance by a large number of States, and, even without binding legal effect, they may be seen as declaratory of broadly accepted principles within the international community.19
Those instruments are sometimes qualified as "soft law." Some legal scholars and law professionals argue that they are actually gaining the force of "hard law" because of their near universal acceptance, and also because they are detailed developments of the broader treaty language obligations. To that extent, they may be viewed as playing a role similar to a law or regulation that puts into daily practice a constitutional provision.20
Human rights education
"Human rights education can be defined as education, training and information aiming at building a universal culture of human rights through the sharing of knowledge, imparting of skills and molding of attitudes directed to:21
International Bill of Rights
The International Bill of Human Rights consists of the Universal Declaration of Human Rights, the International Covenant on Economic, Social and Cultural Rights, and the International Covenant on Civil and Political Rights and its two Optional Protocols.22
Ratification
Ratification is a State's formal expression of consent to be bound by a treaty. Only a State that has previously signed the treaty (during the period when the treaty was open for signature) can ratify it. Ratification consists of two procedural acts: on the domestic level, it requires approval by the appropriate constitutional organ (usually the head of State or parliament). On the international level, pursuant to the relevant provision of the treaty in question, the instrument of ratification shall be formally transmitted to the depositary, which may be a State or an international organization such as the United Nations.23
States Party(ies)
The expression refers to the States that have ratified a covenant or a convention and are thereby bound to conform to its provisions.
State responsibility for human rights
The obligation to protect, promote and ensure the enjoyment of human rights is the prime responsibility of States, thereby conferring on States responsibility for the human rights of individuals. Many human rights are owed by States to all people within their territories, while certain human rights are owed by a State to particular groups of people: for example, the right to vote in elections is only owed to citizens of a State. State responsibilities include the obligation to take pro-active measures to ensure that human rights are protected by providing effective remedies to persons whose rights are violated, as well as measures against violating the rights of persons within its territory.24
Under international law, the enjoyment of certain rights can be restricted in specific circumstances. For example, if an individual is found guilty of a crime after a fair trial, the State may lawfully restrict a person's freedom of movement by imprisonment. Restrictions on civil and political rights may only be imposed if the limitation is determined by law, but only for the purposes of securing due recognition of the rights of others and of meeting the just requirements of morality, public order and the general welfare in a democratic society. Economic, social and cultural rights may be limited by law, but only insofar as the limitation is compatible with the nature of the rights and solely to promote the general welfare in a democratic society.25 It is important to distinguish between these "restrictions" and "derogations," the latter meaning the partial abrogation of a law.26 When a state derogates from a law, it enacts something which is contrary to it. While many rights are derogable, especially in situations of political emergencies or crises, there are some basic rights which are considered as non-derogable.
Treaty
A treaty is an agreement by States to be bound by particular rules. International treaties have different designations such as covenants, charters, protocols, conventions, accords and agreements. A treaty is legally binding on those States which have consented to be bound by the provisions of the treaty.27 It is important to note that States can make reservations to international human rights treaties; some can have an important impact on the application of the treaty. The legitimacy and role of such reservations is a heavily contested issue.28
In other words, writing a history of human rights remains, in itself, a sensitive issue.29
Go to debate Human rights: universalism versus cultural relativism
Written precursors to modern human rights thinking include the Magna Charta Libertatum (1215) and the English Bill of Rights (1689). Earlier documents specified rights, which could be claimed in the light of particular circumstances (e.g. threats to the freedom of religion), but they did not yet contain an all-embracing philosophical concept of individual liberty. Freedoms were often seen as rights conferred upon individuals or groups by virtue of their rank or status. In the centuries after the Middle Ages, the concept of liberty became gradually separated from status and came to be seen not as a privilege but as a right of all human beings.
But the concept of human rights itself emerged as an explicit category in the 18th century Age of Enlightenment. Every human being came to be seen as an autonomous individual, endowed by nature with certain inalienable fundamental rights that could be invoked against a government and should be safeguarded by it. Human rights were henceforth seen as elementary preconditions for an existence worthy of human dignity. The Enlightenment was decisive in the development of human rights concepts. The ideas of Hugo Grotius (1583-1645) and other fathers of modern international law attracted much interest in Europe in the 18th century. In parallel, different philosophers conceptualized the social contract theory: Thomas Hobbes (1588-1679), John Locke (1632-1704) and Jean-Jacques Rousseau (1712-1778). However, they differed in the extent of the rights contracted away from the individual to the sovereign. Hobbes argued that the sovereign must take absolute control, while Locke and Rousseau maintained that many, if not most, rights remained with the individual and, importantly, that the contract was reversible, i.e. could be rescinded by the population if the sovereign was not keeping to his part of the bargain.30
The French Declaration on the Rights of Man and Citizen (1789), and the US Constitution and Bill of Rights (1791) are considered foundational documents in the history of the human rights movement. The American Declaration of Independence of 4 July 1776 was based on the assumption that all human beings are equal. It also referred to certain inalienable rights, such as the right to life, liberty and the pursuit of happiness. These ideas were also reflected in the Bill of Rights which was promulgated by the state of Virginia in the same year. The provisions of the Declaration of Independence were adopted by other American states, but they also found their way into the Bill of Rights of the American Constitution. The French Déclaration des Droits de l'Homme et du Citoyen of 1789, as well as the French Declaration of 1793, reflected the emerging international theory of universal rights and contained for the first time the term "human rights".
The atrocities of World War II put an end to the traditional view that states have full liberty to decide the treatment of their own citizens. The signing of the Charter of the United Nations (UN) on 26 June 1945 brought human rights within the sphere of international law. In particular, all UN members agreed to take measures to protect human rights.31 The Charter contains a number of articles specifically referring to human rights. Less than two years later, the UN Commission on Human Rights (UNCHR), established early in 1946, submitted a draft Universal Declaration of Human Rights (UDHR). The UN General Assembly (UNGA) adopted the Declaration in Paris on 10 December 1948; it passed without a single vote against it, with support from countries in every region of the world. This day was later designated Human Rights Day.
During the 1950s and 1960s, more and more countries joined the UN. Upon joining, they formally accepted the obligations contained in the UN Charter, and in doing so subscribed to the principles and ideals laid down in the UDHR. The UDHR is supported by a large number of international conventions (nine core human rights instruments) and their respective supervisory mechanisms, which together constitute the modern international human rights regime.
Human rights have also been receiving more and more attention at the regional level. In the European, the Inter-American and the African context, standards and supervisory mechanisms have been developed that have already had a significant impact on human rights compliance in the respective continents, and promise to contribute to compliance in the future:32
2. This was reaffirmed, for instance, in the Vienna Declaration and Programme of Action at the end of the UN World Conference on Human Rights (1993); see para. 5 of the Vienna Declaration and Programme of Action.
3. Jerome J Shestack, "The Philosophic Foundations of Human Rights," Human Rights Quarterly 20, no. 2 (1998): 202-204.
4. Comment by Ebrahim Afsah, 9 November 2008.
5. Comment by Bill O'Neill, 10 October 2008.
6. Sepulveda et al., Human Rights Reference Handbook (Cuidad Colon: University of Peace, 2004), 7.
7. Comment by Bill O'Neill, 10 October 2008.
8. Comment by Ebrahim Afsah, 9 November 2008.
9. See Articles 1 of the International Covenant on Civil and Political Rights and the International Covenant on Economic, Social and Cultural Rights.
10. Human Rights Reference Handbook, 12-13.
11. Human Rights Reference Handbook, 13.
13. Ibid; see also United Nations Office of the High Commissioner for Human Rights (OHCHR) and United Nations Staff College, Human Rights: A Basic Handbook for UN Staff (Geneva, 2001).
14. Human Rights Reference Handbook, 8-10; see also Human Rights: A Basic Handbook for UN Staff:
15. Declaration on the Right to Development, adopted by the United Nations General Assembly on 4 December 1986, and the 1993 Vienna Declaration and Programme of Action (Paragraph I, 10).
16. Comment by Ebrahim Afsah, 9 November 2008.
17. Human Rights: A Basic Handbook for UN Staff, 5.
18. Article 38 (1) of the Statute of the International Court of Justice.
19. Ibid, 6-7.
20. Comment by Bill O'Neill, 10 October 2008.
21. United Nations Education, Scientific and Cultural Organization (UNESCO), Definition of Human Rights Education.
22. Office of the High Commissioner for Human Rights (OHCHR), Fact Sheet No.2 (Rev.1), The International Bill of Human Rights.
23. Human Rights: A Basic Handbook for UN Staff, 4.
24. Human Rights: A Basic Handbook for UN Staff, 5.
26. Comment by Bill O'Neill, 10 October 2008.
27. Ibid, 4.
28. Eric Neumayer, Qualified Ratification: Explaining Reservations to International Human Rights Treaties (July 2006).
29. Comment by Ebrahim Afsah, 9 November 2008. See also Naz K. Modirzadeh. "Taking Islamic Law Seriously: INGOs and the Battle for Muslim Hearts and Minds," Harvard Human Rights Journal 19: 191-233.
30. Comment by Ebrahim Afsah, 9 November 2008.
31. Article 1-3 of the UN Charter: [The United Nations is mandated] "To achieve international co-operation in solving international problems of an economic, social, cultural, or humanitarian character, and in promoting and encouraging respect for human rights and for fundamental freedoms for all without distinction as to race, sex, language, or religion."
32. The Association of Southeast Asian Nations (ASEAN) is in the process of adopting a regional human rights charter. | http://peacebuildinginitiative.org/index.cfm?pageId=1847 | 13
16 | The narrow majority by which John Adams was elected did not accurately reflect the existing state of party strength. The electoral college system, by its nature, was apt to distort the situation. Originally the electors voted for two persons without designating their preference for President. There was no inconvenience on that account while Washington was a candidate, since he was the first choice of all the electors; but in 1796, with Washington out of the field, both parties were in the dilemma that, if they voted solidly for two candidates, the vote of the electoral college would not determine who should be President. To avert this situation, the adherents of a presidential candidate would have to scatter votes meant to have only vice-presidential significance. This explains the wide distribution of votes that characterized the working of the system until it was changed by the Twelfth Amendment adopted in 1804.
In 1796, the electoral college gave votes to thirteen candidates. The Federalist ticket was John Adams and Thomas Pinckney of South Carolina. Hamilton urged equal support of both as the surest way to defeat Jefferson; but eighteen Adams electors in New England withheld votes from Pinckney to make sure that he should not slip in ahead of Adams. Had they not done so, Pinckney would have been chosen President, a possibility which Hamilton foresaw because of Pinckney's popularity in the South. New York, New Jersey, and Delaware voted solidly for Adams and Pinckney as Hamilton had recommended, but South Carolina voted solidly for both Jefferson and Pinckney, and moreover Pinckney received scattering votes elsewhere in the South. The action of the Adams electors in New England defeated Pinckney, and gave Jefferson the vice-presidency, the vote for the leading candidates being 71 for Adams, 68 for Jefferson, and 59 for Pinckney. The tendency of such conditions to inspire political feuds and to foster factional animosity is quite obvious. This situation must be borne in mind, in order to make intelligible the course of Adams's administration.
Adams had an inheritance of trouble from the same source which had plagued Washington's administration, — the efforts of revolutionary France to rule the United States. In selecting Monroe to succeed Morris, Washington knew that the former was as friendly to the French Revolution as Morris had been opposed to it, and hence he hoped that Monroe would be able to impart a more friendly feeling to the relations of the two countries. Monroe arrived in Paris just after the fall of Robespierre. The Committee of Public Safety then in possession of the executive authority hesitated to receive him. Monroe wrote to the President of the National Convention then sitting, and a decree was at once passed that the Minister of the United States should "be introduced in the bosom of the Convention." Monroe presented himself on August 15, 1794, and made a glowing address. He descanted upon the trials by which America had won her independence and declared that "France, our ally and friend, and who aided in the contest, has now embarked in the same noble career." The address was received with enthusiasm, the President of the Convention drew Monroe to his bosom in a fraternal embrace; and it was decreed that "the flags of the United States of America shall be joined to those of France, and displayed in the hall of the sittings of the Convention, in sign of the union and eternal fraternity of the two peoples." In compliance with this decree Monroe soon after presented an American flag to the Convention.
When the news of these proceedings reached the State Department, a sharp note was sent to Monroe "to recommend caution lest we be obliged at some time or other to explain away or disavow an excess of fervor, so as to reduce it down to the cool system of neutrality." The French Government regarded the Jay treaty as an affront and as a violation of our treaties with France. Many American vessels were seized and confiscated with their cargoes, and hundreds of American citizens were imprisoned. Washington thought that Monroe was entirely too submissive to such proceedings; therefore, on August 22, 1796, Monroe was recalled and soon after Charles Cotesworth Pinckney was appointed in his stead.
The representation of France in the United States had been as mutable as her politics. Fauchet, who succeeded Genet, retired in June, 1795, and was succeeded by Adet, who, like his predecessors, carried on active interference with American politics, and even attempted to affect the presidential election by making public a note addressed to the Secretary of State complaining of the behavior of the Administration. In Adams's opinion this note had some adverse effect in Pennsylvania but no other serious consequences, since it was generally resented. Meanwhile Pinckney arrived in France in December, 1796, and the Directory refused to receive him. He was not even permitted to remain in Paris; but honors were showered upon Monroe as he took his leave. In March, 1797, Adet withdrew, and diplomatic relations between the two countries were entirely suspended. By a decree made two days before Adams took office, the Directory proclaimed as pirates, to be treated without mercy, all Americans found serving on board British vessels, and ordered the seizure of all American vessels not provided with lists of their crews in proper form. Though made under cover of the treaty of 1778, this latter provision ran counter to its spirit and purpose. Captures of American ships began at once. As Joel Barlow wrote, the decree of March 2, 1797, "was meant to be little short of a declaration of war."
The curious situation which ensued from the efforts made by Adams to deal with this emergency cannot be understood without reference to his personal peculiarities. He was vain, learned, and self-sufficient, and he had the characteristic defect of pedantry: he overrated intelligence and he underrated character. Hence he was inclined to resent Washington's eminence as being due more to fortune than to merit, and he had for Hamilton an active hatred compounded of wounded vanity and a sense of positive injury. He knew that Hamilton thought slightingly of his political capacity and had worked against his political advancement, and he was too lacking in magnanimity to do justice to Hamilton's motives. His state of mind was well known to the Republican leaders, who hoped to be able to use him. Jefferson wrote to Madison suggesting that "it would be worthy of consideration whether it would not be for the public good to come to a good understanding with him as to his future elections." Jefferson himself called on Adams and showed himself desirous of cordial relations. Mrs. Adams responded by expressions of pleasure at the success of Jefferson, between whom and her husband, she said, there had never been "any public or private animosity." Such rejoicing over the defeat of the Federalist candidate for Vice-President did not promote good feeling between the President and the Federalist leaders.
The morning before the inauguration, Adams called on Jefferson and discussed with him the policy to be pursued toward France. The idea had occurred to Adams that a good impression might be made by sending out a mission of extraordinary weight and dignity, and he wanted to know whether Jefferson himself would not be willing to head such a mission. Without checking Adams's friendly overtures, Jefferson soon brought him to agree that it would not be proper for the Vice-President to accept such a post. Adams then proposed that Madison should go. On March 6, Jefferson reported to Adams that Madison would not accept. Then for the first time, according to Adams's own account, he consulted a member of his Cabinet, supposed to be Wolcott although the name is not mentioned.
Adams took over Washington's Cabinet as it was finally constituted after the retirement of Jefferson and Hamilton and the virtual expulsion of Randolph. The process of change had made it entirely Federalist in its political complexion, and entirely devoted to Washington and Hamilton in its personal sympathies. That Adams should have adopted it as his own Cabinet has been generally regarded as a blunder, but it was a natural step for him to take. To get as capable men to accept the portfolios as those then holding them would have been difficult, so averse had prominent men become to putting themselves in a position to be harried by Congress, with no effective means of explaining and justifying their conduct. Congress then had a prestige which it does not now possess, and its utterances then received consideration not now accorded. Whenever presidential electors were voted for directly by the people, the poll was small compared with the vote for members of Congress. Moreover, there was then a feeling that the Cabinet should be regarded as a bureaucracy, and for a long period this conception tended to give remarkable permanence to its composition.
When the personal attachments of the Cabinet chiefs are considered, it is easy to imagine the dismay and consternation produced by the dealings of Adams with Jefferson. By the time Adams consulted the members of his Cabinet, they had become suspicious of his motives and distrustful of his character. Before long they were writing to Washington and Hamilton for advice, and were endeavoring to manage Adams by concerted action. In this course they had the cordial approval of leading Federalists, who would write privately to members of the Cabinet and give counsel as to procedure. Wolcott, a Federalist leader in Connecticut, warned his son, the Secretary of the Treasury, that Adams was "a man of great vanity, pretty capricious, of a very moderate share of prudence, and of far less real abilities than he believes himself to possess," so that "it will require a deal of address to render him the service which it will be essential for him to receive."
The policy to be pursued was still unsettled when news came of the insulting rejection of Pinckney and the domineering attitude assumed by France. On March 25, Adams issued a call for the meeting of Congress on May 15, and then set about getting the advice of his Cabinet. He presented a schedule of interrogatories to which he asked written answers. The attitude of the Cabinet was at first hostile to Adams's favorite notion of a special mission, but as Hamilton counseled deference to the President's views, the Cabinet finally approved the project. Adams appointed John Marshall of Virginia and Elbridge Gerry of Massachusetts to serve in conjunction with Pinckney, who had taken refuge in Holland.
Strong support for the Government in taking a firm stand against France was manifested in both Houses of Congress. Hamilton aided Secretary Wolcott in preparing a scheme of taxation by which the revenue could be increased to provide for national defense. With the singular fatality that characterized Federalist party behavior throughout Adams's Administration, however, all the items proposed were abandoned except one for stamp taxes. What had been offered as a scheme whose particulars were justifiable by their relation to the whole was converted into a measure which was traditionally obnoxious in itself, and was now made freshly odious by an appearance of discrimination and partiality. The Federalists did improve their opportunity in the way of general legislation: much needed laws were passed to stop privateering, to protect the ports, and to increase the naval armament; and Adams was placed in a much better position to maintain neutrality than Washington had been. Fear of another outbreak of yellow fever accelerated the work of Congress, and the extra session lasted only a little over three weeks.
Such was the slowness of communication in those days that, when Congress reassembled at the regular session in November, no decisive news had arrived of the fate of the special mission. Adams with proper prudence thought it would be wise to consider what should be done in case of failure. On January 24, 1798, he addressed to the members of his Cabinet a letter requesting their views. No record is preserved of the replies of the Secretaries of State and of the Treasury. Lee, the Attorney-General, recommended a declaration of war. McHenry, the Secretary of War, offered a series of seven propositions to be recommended to Congress: 1. Permission to merchant ships to arm; 2. The construction of twenty sloops of war; 3. The completion of frigates already authorized; 4. Grant to the President of authority to provide ships of the line, not exceeding ten, "by such means as he may judge best"; 5. Suspension of the treaties with France; 6. An army of sixteen thousand men, with provision for twenty thousand more should occasion demand; 7. A loan and an adequate system of taxation.
These recommendations are substantially identical with those made by Hamilton in a letter to Pickering, and the presumption is strong that McHenry's paper is a product of Hamilton's influence, and that it had the concurrence of Pickering and Wolcott. The suggestion that the President should be given discretionary authority in the matter of procuring ships of the line contemplated the possibility of obtaining them by transfer from England, not through formal alliance but as an incident of a coöperation to be arranged by negotiation, whose objects would also include aid in placing a loan and permission for American ships to join British convoys. This feature of McHenry's recommendations could not be carried out. Pickering soon informed Hamilton that the old animosities were still so active "in some breasts" that the plan of coöperation was impracticable.
Meanwhile the composite mission had accomplished nothing except to make clear the actual character of French policy. When the envoys arrived in France, the Directory had found in Napoleon Bonaparte an instrument of power that was stunning Europe by its tremendous blows. That instrument had not yet turned to the reorganization of France herself, and at the time it served the rapacious designs of the Directory. Europe was looted wherever the arms of France prevailed, and the levying of tribute both on public and on private account was the order of the day. Talleyrand was the Minister of Foreign Affairs, and he treated the envoys with a mixture of menace and cajolery. It was a part of his tactics to sever the Republican member, Gerry, from his Federalist colleagues. Gerry was weak enough to be caught by Talleyrand's snare, and he was foolish enough to attribute the remonstrances of his colleagues to vanity. "They were wounded," he wrote, "by the manner in which they had been treated by the Government of France, and the difference which had been used in respect to me." Gerry's conduct served to weaken and delay the negotiations, but he eventually united with his colleagues in a detailed report to the State Department, which was transmitted to Congress by the President on April 3, 1798. In the original the names of the French officials concerned were written at full length in the Department cipher. In making a copy for Congress, Secretary Pickering substituted for the names the terminal letters of the alphabet, and hence the report has passed into history as the X. Y. Z. dispatches.
The story, in brief, was that on arriving in Paris the envoys called on Talleyrand, who said that he was busy at that very time on a report to the Directory on American affairs, and in a few days would let them know how matters stood. A few days later they received notice through Talleyrand's secretary that the Directory was greatly exasperated by expressions used in President Adams's address to Congress, that the envoys would probably not be received until further conference, and that persons might be appointed to treat with them. A few more days elapsed, and then three persons presented themselves as coming from Talleyrand. They were Hottinguer, Bellamy, and Hauteval, designated as X. Y. Z. in the communication to Congress. They said that a friendly reception by the Directory could not be obtained unless the United States would assist France by a loan, and that "a sum of money was required for the pocket of the Directory and Ministers, which would be at the disposal of M. Talleyrand." This "douceur to the Directory," amounting to approximately $240,000, was urged with great persistence as an indispensable condition of friendly relations. The envoys temporized and pointed out that their Government would have to be consulted on the matter of the loan. The wariness of the envoys made Talleyrand's agents the more insistent about getting the "douceur." At one of the interviews Hottinguer exclaimed:— "Gentlemen, you do not speak to the point; it is money; it is expected that you will offer money." The envoys replied that on this point their answer had already been given. " 'No,' said he, 'you have not: what is your answer?' We replied, 'It is no; no; not a sixpence.' " This part of the envoys' report soon received legendary embellishment, and in innumerable stump speeches it rang out as, "Not one cent for tribute; millions for defense!"
The publication of the X. Y. Z. dispatches sent rolling through the country a wave of patriotic feeling before which the Republican leaders quailed and which swept away many of their followers. Jefferson held that the French Government ought not to be held responsible for "the turpitude of swindlers," and he steadfastly opposed any action looking to the use of force to maintain American rights. Some of the Republican members of Congress, however, went over to the Federalist side, and Jefferson's party was presently reduced to a feeble and dispirited minority. Loyal addresses rained upon Adams. There appeared a new national song, Hail Columbia, which was sung all over the land and which was established in lasting popularity. Among its well-known lines is an exulting stanza beginning:

Behold the chief who now commands,
Once more to serve his country stands.
This is an allusion to the fact that Washington had left his retirement to take charge of the national forces. The envoys had been threatened that, unless they submitted to the French demands, the American Republic might share the fate of the Republic of Venice. The response of Congress was to vote money to complete the frigates, the United States, the Constitution, and the Constellation, work on which had been suspended when the Algerine troubles subsided; and further, to authorize the construction or purchase of twelve additional vessels. For the management of this force, the Navy Department was created by the Act of April 30, 1798. By an Act of May 28, the President was authorized to raise a military force of ten thousand men, the commander of which should have the services of "a suitable number of major-generals." On July 7, the treaties with France that had so long vexed the United States were abrogated.
The operations of the Navy Department soon showed that American sailors were quite able and willing to defend the nation if they were allowed the opportunity. In December, 1798, the Navy Department worked out a plan of operations in the enemy's waters. To repress the depredations of the French privateers in the West Indies, a squadron commanded by Captain John Barry was sent to cruise to the windward of St. Kitts as far south as Barbados, and it made numerous captures. A squadron under Captain Thomas Truxtun cruised in the vicinity of Porto Rico. The flagship was the frigate Constellation, which on February 9, 1799, encountered the French frigate, L'Insurgente, and made it strike its flag after an action lasting only an hour and seventeen minutes. The French captain fought well, but he was put at a disadvantage by losing his topmast at the opening of the engagement, so that Captain Truxtun was able to take a raking position. The American loss was only one killed and three wounded, while L'Insurgente had twenty-nine killed and forty-one wounded. On February 1, 1800, the Constellation fought the heavy French frigate Vengeance from about eight o'clock in the evening until after midnight, when the Vengeance lay completely silenced and apparently helpless. But the rigging and spars of the Constellation had been so badly cut up that the mainmast fell, and before the wreck could be cleared away the Vengeance was able to make her escape. During the two years and a half in which hostilities continued, the little navy of the United States captured eighty-five armed French vessels, nearly all privateers. Only one American war vessel was taken by the enemy, and that one had been originally a captured French vessel. The value of the protection thus extended to American trade is attested by the increase of exports from $57,000,000 in 1797 to $78,665,528 in 1799. Revenue from imports increased from $6,000,000 in 1797 to $9,080,932 in 1800.
The creation of an army, however, was attended by personal disagreements that eventually wrecked the Administration. Without waiting to hear from Washington as to his views, Adams nominated him for the command and then tried to overrule his arrangements. The notion that Washington could be hustled into a false position was a strange blunder to be made by anyone who knew him. He set forth his views and made his stipulations with his customary precision, in letters to Secretary McHenry, who had been instructed by Adams to obtain Washington's advice as to the list of officers. Washington recommended as major-generals, Hamilton, C.C. Pinckney, and Knox, in that order of rank. Adams made some demur to the preference shown for Hamilton, but McHenry showed him Washington's letter and argued the matter so persistently that Adams finally sent the nominations to the Senate in the same order as Washington had requested. Confirmation promptly followed, and a few days later Adams departed for his home at Quincy, Massachusetts, without notice to his Cabinet. It soon appeared that he was in the sulks. When McHenry wrote to him about proceeding with the organization of the army, he replied that he was willing provided Knox's precedence was acknowledged, and he added that the five New England States would not patiently submit to the humiliation of having Knox's claim disregarded.
From August 4 to October 13, wrangling over this matter went on. The members of the Cabinet were in a difficult position. It was their understanding that Washington's stipulations had been accepted, but the President now proposed a different arrangement. Pickering and McHenry wrote to Washington explaining the situation in detail. News of the differences between Adams and Washington of course soon got about and caused a great buzz in political circles. Adams became angry over the opposition he was meeting, and on August 29 he wrote to McHenry that "there has been too much intrigue in this business, both with General Washington and with me"; that it might as well be understood that in any event he would have the last say, "and I shall then determine it exactly as I should now, Knox, Pinckney, and Hamilton." Washington stood firm and, on September 25, wrote to the President demanding "that he might know at once and precisely what he had to expect." In reply Adams said that he had signed the three commissions on the same day in the hope "that an amicable adjustment or acquiescence might take place among the gentlemen themselves." But should this hope be disappointed, "and controversies shall arise, they will of course be submitted to you as commander-in-chief."
Adams, of course, knew quite well that such matters did not settle themselves, but he seems to have imagined that all he had to do was to sit tight and that matters would have to come his way. The tricky and shuffling behavior to which he descended would be unbelievable of a man of his standing were there not an authentic record made by himself. The suspense finally became so intolerable that the Cabinet acted without consulting the President any longer on the point. The Secretary of War submitted to his colleagues all the correspondence in the case and asked their advice. The Secretaries of State, of the Treasury, and of the Navy made a joint reply declaring "the only inference which we can draw from the facts before stated, is, that the President consents to the arrangement of rank as proposed by General Washington," and that therefore "the Secretary of War ought to transmit the commissions, and inform the generals that in his opinion the rank is definitely settled according to the original arrangement." This was done; but Knox declined an appointment ranking him below Hamilton and Pinckney. Thus Adams, despite his obstinacy, was completely baffled, and a bitter feud between him and his Cabinet was added to the causes now at work to destroy the Federalist party.
The Federalist military measures were sound and judicious, and the expense, although a subject of bitter denunciation, was really trivial in comparison with the national value of the enhanced respect and consideration obtained for American interests. But these measures were followed by imprudent acts for regulating domestic politics. By the Act of June 18, 1798, the period of residence required before an alien could be admitted to American citizenship was raised from five years to fourteen. By the Act of June 25, 1798, the efficacy of which was limited to two years, the President might send out of the country "such aliens as he shall judge dangerous to the peace and safety of the United States, or shall have reasonable grounds to suspect are concerned in any treasonable or secret machinations against the government thereof." The state of public opinion might then have sanctioned these measures had they stood alone, but they were connected with another which proved to be the weight that pulled them all down. By the Act of July 14, 1798, it was made a crime to write or publish "any false, scandalous, and malicious" statements about the President or either House of Congress, to bring them "into contempt or disrepute," or to "stir up sedition within the United States."
There were plenty of precedents in English history for legislation of such character. Robust examples of it were supplied in England at that very time. There were also strong colonial precedents. According to Secretary Wolcott, the sedition law was "merely a copy from a statute of Virginia in October, 1776." But a revolutionary Whig measure aimed at Tories was a very different thing in its practical aspect from the same measure used by a national party against a constitutional opposition. Hamilton regarded such legislation as impolitic, and, on hearing of the sedition bill, he wrote a protesting letter, saying, "Let us not establish tyranny. Energy is a very different thing from violence."
But in general the Federalist leaders were so carried away by the excitement of the times that they could not practice moderation. Their zealotry was sustained by political theories which made no distinction between partisanship and sedition. The constitutional function of partisanship was discerned and stated by Burke in 1770, but his definition of it, as a joint endeavor to promote the national interest upon some particular principle, was scouted at the time and was not allowed until long after. The prevailing idea in Washington's time, both in England and America, was that partisanship was inherently pernicious and ought to be suppressed. Washington's Farewell Address warned the people "in the most solemn manner against the baneful effects of the spirit of party." The idea then was that government was wholly the affair of constituted authority, and that it was improper for political activity to surpass the appointed bounds. Newspaper criticism and partisan oratory were among the things in Washington's mind when he censured all attempts "to direct, control, counteract, or awe the regular deliberation and action of the constituted authorities." Hence judges thought it within their province to denounce political agitators when charging a grand jury. Chief Justice Ellsworth, in a charge delivered in Massachusetts, denounced "the French system-mongers, from the quintumvirate at Paris to the Vice-President and minority in Congress, as apostles of atheism and anarchy, bloodshed, and plunder." In charges delivered in western Pennsylvania, Judge Addison dealt with such subjects as Jealousy of Administration and Government, and the Horrors of Revolution. Washington, then in private life, was so pleased with the series that he sent a copy to friends for circulation.
Convictions under the sedition law were few, but there were enough of them to cause great alarm. A Jerseyman, who had expressed a wish that the wad of a cannon, fired as a salute to the President, had hit him on the rear bulge of his breeches, was fined $100. Matthew Lyon of Vermont, while canvassing for reëlection to Congress, charged the President with "unbounded thirst for ridiculous pomp, foolish adulation, and a selfish avarice." This language cost him four months in jail and a fine of $1000. But in general the law did not repress the tendencies at which it was aimed but merely increased them.
The Republicans, too weak to make an effective stand in Congress, tried to interpose state authority. Jefferson drafted the Kentucky Resolutions, adopted by the state legislature in November, 1798. They hold that the Constitution is a compact to which the States are parties, and that "each party has an equal right to judge for itself as well of infractions as of the mode and measure of redress." The alien and sedition laws were denounced, and steps were proposed by which protesting States "will concur in declaring these Acts void and of no force, and will each take measures of its own for providing that neither these Acts, nor any others of the general Government, not plainly and intentionally authorized by the Constitution, shall be exercised within their respective territories." The Virginia Resolutions, adopted in December, 1798, were drafted by Madison. They view "the powers of the federal Government as resulting from the compact to which the States are parties," and declare that, if those powers are exceeded, the States "have the right and are in duty bound to interpose." This doctrine was a vial of woe to American politics until it was cast down and shattered on the battlefield of civil war.a It was invented for a partisan purpose, and yet was entirely unnecessary for that purpose.
The Federalist party as then conducted was the exponent of a theory of government that was everywhere decaying. The alien and sedition laws were condemned and discarded by the forces of national politics, and state action was as futile in effect as it was mischievous in principle. It diverted the issue in a way that might have ultimately turned to the advantage of the Federalist party, had it possessed the usual power of adaptation to circumstances. After all, there was no reason inherent in the nature of that party why it should not have perpetuated its organization and repaired its fortunes by learning how to derive authority from public opinion. The needed transformation of character would have been no greater than has often been accomplished in party history. Indeed, there is something abnormal in the complete prostration and eventual extinction of the Federalist party; and the explanation is to be found in the extraordinary character of Adams's administration. It gave such prominence and energy to individual aims and interests that the party was rent to pieces by them.
In communicating the X. Y. Z. dispatches to Congress, Adams declared: "I will never send another Minister to France without assurance that he will be received, respected, and honored, as the representative of a great, free, powerful, and independent nation." But on receiving an authentic though roundabout intimation that a new mission would have a friendly reception, he concluded to dispense with direct assurances, and, without consulting his Cabinet, sent a message to the Senate on February 18, 1799, nominating Murray, then American Minister to Holland, to be Minister to France. This unexpected action stunned the Federalists and delighted the Republicans as it endorsed the position they had always taken that war talk was folly and that France was ready to be friendly if America would treat her fairly. "Had the foulest heart and the ablest head in the world," wrote Senator Sedgwick to Hamilton, "been permitted to select the most embarrassing and ruinous measure, perhaps it would have been precisely the one which has been adopted." Hamilton advised that "the measure must go into effect with the additional idea of a commission of three." The committee of the Senate to whom the nomination was referred made a call upon Adams to inquire his reasons. According to Adams's own account, they informed him that a commission would be more satisfactory to the Senate and to the public. According to Secretary Pickering, Adams was asked to withdraw the nomination and refused, but a few days later, on hearing that the committee intended to report against confirmation, he sent in a message nominating Chief Justice Ellsworth and Patrick Henry, together with Murray, as envoys extraordinary. The Senate, much to Adams's satisfaction, promptly confirmed the nominations, but this was because Hamilton's influence had smoothed the way. Patrick Henry declined, and Governor Davie of North Carolina was substituted. By the time this mission reached France, Napoleon Bonaparte was in power and the envoys were able to make an acceptable settlement of the questions at issue between the two countries. The event came too late to be of service to Adams in his campaign for reëlection, but it was intensely gratifying to his self-esteem.
Some feelers were put forth to ascertain whether Washington could not be induced to be a candidate again, but the idea had hardly developed before all hopes in that quarter were abruptly dashed by his death on December 14, 1799, from a badly treated attack of quinsy. Efforts to substitute some other candidate for Adams proved unavailing, as New England still clung to him on sectional grounds. News of these efforts of course reached Adams and increased his bitterness against Hamilton, whom he regarded as chiefly responsible for them. Adams had a deep spite against members of his Cabinet for the way in which they had foiled him about Hamilton's commission, but for his own convenience in routine matters he had retained them, although debarring them from his confidence. In the spring of 1800 he decided to rid himself of men whom he regarded as "Hamilton's spies." The first to fall was McHenry, whose resignation was demanded on May 5, 1800, after an interview in which — according to McHenry — Adams reproached him with having "biased General Washington to place Hamilton in his list of major-generals before Knox." Pickering refused to resign, and he was dismissed from office on May 12. John Marshall became the Secretary of State, and Samuel Dexter of Massachusetts, Secretary of War. Wolcott retained the Treasury portfolio until the end of the year, when he resigned of his own motion.
The events of the summer of 1800 completed the ruin of the Federalist party. That Adams should have been so indifferent to the good will of his party at a time when he was a candidate for reëlection is a remarkable circumstance. A common report among the Federalists was that he was no longer entirely sane. A more likely supposition was that he was influenced by some of the Republican leaders and counted on their political support. In biographies of Gerry it is claimed that he was able to accomplish important results through his influence with Adams. At any rate, Adams gave unrestrained expression to his feelings against Hamilton, and finally Hamilton was aroused to action. On August 1, 1800, he wrote to Adams demanding whether it was true that Adams had "asserted the existence of a British faction in this country" of which Hamilton himself was said to be a leader. Adams did not reply. Hamilton waited until October 1, and then wrote again, affirming "that by whomsoever a charge of the kind mentioned in my former letter, may, at any time, have been made or insinuated against me, it is a base, wicked, and cruel calumny; destitute even of a plausible pretext, to excuse the folly, or mask the depravity which must have dictated it."
Hamilton, always sensitive to imputations upon his honor, was not satisfied to allow the matter to rest there. He wrote a detailed account of his relations with Adams, involving an examination of Adams's public conduct and character, which he privately circulated among leading Federalists. It is an able paper, fully displaying Hamilton's power of combining force of argument with dignity of language, but although exhibiting Adams as unfit for his office it advised support of his candidacy. Burr obtained a copy and made such use of parts of it that Hamilton himself had to publish it in full.
In this election the candidate associated with Adams by the Federalists was Charles Cotesworth Pinckney of South Carolina. Though one Adams elector in Rhode Island cut Pinckney, he would still have been elected had the electoral votes of his own State been cast for him as they had been for Thomas Pinckney, four years before; but South Carolina now voted solidly for both Republican candidates. The result of the election was a tie between Jefferson and Burr, each receiving 73 votes, while Adams received 65 and Pinckney 64. The election was thus thrown into the House, where some of the Federalists entered into an intrigue to give Burr the Presidency instead of Jefferson, but this scheme was defeated largely through Hamilton's influence. He wrote: "If there be a man in this world I ought to hate, it is Jefferson. With Burr I have always been personally well. But the public good must be paramount to every private consideration."
The result of the election was a terrible blow to Adams. His vanity was so hurt that he could not bear to be present at the installation of his successor, and after working almost to the stroke of midnight signing appointments to office for the defeated Federalists, he drove away from Washington in the early morning before the inauguration ceremonies began. Eventually he soothed his self-esteem by associating his own trials and misfortunes with those endured by classical heroes. He wrote that Washington, Hamilton, and Pinckney formed a triumvirate like that of Antony, Octavius, and Lepidus, and "that Cicero was not sacrificed to the vengeance of Antony more egregiously than John Adams was to the unbridled and unbounded ambition of Alexander Hamilton in the American triumvirate."
a The author's contention that the War Between the States was a good thing because it voided by force of arms the rights of the States, the cornerstone of the Constitution and the great virtue that made American polity unique in the history of nations by ensuring the impossibility of federal tyranny, is a monstrous doctrine. What could be the harm in States choosing to go their own way? Where is it stated in the Constitution that a State cannot secede? And does that Constitution itself not explicitly state that rights not spelled out in it continue to belong to the States that contracted to adopt it? The War Between the States, in which four hundred thousand people died and another half million were maimed in order to prevent States from deciding their own laws and destiny, finds a justification only in the minds of those who worship power over right and law; and in the ever-spiralling encroachments of the modern federal government on the rights of states and individuals, Americans continue to be paid back many times over for the shortsightedness of those who insisted on war and coercion instead of respect for the Constitution.
History of Connecticut
The U.S. state of Connecticut began as three distinct settlements, referred to at the time as "Colonies" or "Plantations". These ventures were eventually combined under a single royal charter in 1662.
Colonies in Connecticut
Various Algonquian tribes inhabited the area prior to European settlement. The Dutch were the first Europeans in Connecticut. In 1614 Adriaen Block explored the coast of Long Island Sound, and sailed up the Connecticut River at least as far as the confluence of the Park River, site of modern Hartford, Connecticut. By 1623, the new Dutch West India Company regularly traded for furs there, and ten years later they fortified the site for protection from the Pequot Indians as well as from the expanding English colonies. The fort was named "House of Hope" (also identified as "Fort Hoop", "Good Hope" and "Hope"), but encroaching English colonists made the Dutch agree to withdraw in the 1650 Treaty of Hartford, and by 1654 they were gone.
The first English colonists came from the Bay Colony and Plymouth Colony in Massachusetts. They settled at Windsor in 1633, Wethersfield in 1634, and Hartford in 1636. Thomas Hooker led the Hartford group.
In 1631, the Earl of Warwick granted a patent to a company of investors headed by William Fiennes, 1st Viscount Saye and Sele, and Robert Greville, 2nd Baron Brooke. They funded the establishment of the Saybrook Colony (named for the two lords) at the mouth of the Connecticut River, where Fort Saybrook was erected in 1636. Another Puritan group left Massachusetts and started the New Haven Colony farther west on the northern shore of Long Island Sound in 1637. The Massachusetts colonies did not seek to govern their progeny in Connecticut and Rhode Island. Communication and travel were too difficult, and it was also convenient to have a place for nonconformists to go.
The English settlement and trading post at Windsor especially threatened the Dutch trade, since it was upriver and more accessible to Native people from the interior. That fall and winter the Dutch sent a party upriver as far as modern Springfield, Massachusetts spreading gifts to convince the indigenous inhabitants in the area to bring their trade to the Dutch post at Hartford. Unfortunately, they also spread smallpox and, by the end of the 1633–34 winter, the Native population of the entire valley was reduced from over 8,000 to less than 2,000. Europeans took advantage of this decimation by further settling the fertile valley.
The Pequot War
The Pequot War was the first serious armed conflict between the indigenous peoples and the European settlers in New England. The ravages of disease, coupled with trade pressures, led the Pequots to tighten their hold on the river tribes. Additional incidents began to involve the colonists in the area in 1635, and the following spring a Pequot raid on Wethersfield prompted the three towns to meet. Following the raid on Wethersfield, the war climaxed when 300 Pequot men, women, and children were burned out of their village, hunted down, and massacred.
On May 1, 1637, the Connecticut Colony's river towns each sent delegates to the first General Court, held at the meeting house in Hartford. This was the start of self-government in Connecticut. They pooled their militia under the command of John Mason of Windsor, and declared war on the Pequots. When the war was over, the Pequots had been destroyed as a tribe. In the Treaty of Hartford in 1638, the various New England colonies and their Native allies divided the lands of the Pequots amongst themselves.
Under the Fundamental Orders
The River Towns had created a general government when faced with the demands of a war. In 1639, they took the unprecedented step of documenting the source and form of that government in the Fundamental Orders. They enumerated individual rights and concluded that a free people were the only source of government's authority. The colony grew and expanded rapidly under this new regime.
On April 22, 1662, the Connecticut Colony succeeded in gaining a Royal Charter that embodied and confirmed the self-government that they had created with the Fundamental Orders. The only significant change was that it called for a single Connecticut government with a southern limit at Long Island Sound and a western limit of the Pacific Ocean, which meant that this charter was still in conflict with the New Netherland colony.
Since 1638, the New Haven Colony had been independent of the river towns, but several factors brought it under the new charter. New Haven Colony lost its strongest governor, Theophilus Eaton, and suffered economically after losing its only ocean-going ship. Furthermore, in the early 1660s the colony harbored several of the regicide judges who had sentenced King Charles I to death. The colony was absorbed by the Connecticut Colony partly as royal punishment by King Charles II for harboring the regicide judges. When the English took New Netherland in the 1660s, the new government of the Province of New York claimed the New Haven settlements on Long Island. By January 1665, the merger of the two colonies was completed.
Indian pressures were relieved for some time by the severity and ferocity of the Pequot War. King Philip's War (1675–1676) brought renewed fighting to Connecticut. Although the war primarily affected Massachusetts, Connecticut provided men and supplies. This war effectively removed any remaining warlike Native American influences in Connecticut.
The Dominion of New England
In 1686, Sir Edmund Andros was commissioned as the Royal Governor of the Dominion of New England. Andros maintained that his commission superseded Connecticut's 1662 charter. At first, Connecticut ignored this situation. But in late October 1687, Andros arrived with troops and naval support. Governor Robert Treat had no choice but to convene the assembly. Andros met with the governor and General Court on the evening of October 31, 1687.
Governor Andros praised their industry and government, but after he read them his commission, he demanded their charter. As they placed it on the table, people blew out all the candles. When the light was restored, the charter was missing. According to legend, it was hidden in the Charter Oak. Sir Edmund named four members to his Council for the Government of New England and proceeded to his capital at Boston.
Since Andros viewed New York and Massachusetts as the important parts of his Dominion, he mostly ignored Connecticut. Aside from some taxes demanded and sent to Boston, Connecticut also mostly ignored the new government. When word arrived that the Glorious Revolution had placed William and Mary on the throne, the citizens of Boston arrested Andros and sent him back to England in chains. The Connecticut court met and voted on May 9, 1689 to restore the old charter. They also reelected Robert Treat as governor each year until 1698.
Territorial disputes
According to the 1650 Treaty of Hartford with the Dutch, the western boundary of Connecticut ran north from the west side of Greenwich Bay "provided the said line come not within 10 miles (16 km) of Hudson River." On the other hand, Connecticut's original charter in 1662 granted it all the land to the "South Sea" (i.e. the Pacific Ocean).
- ALL that parte of our dominions in Newe England in America bounded on the East by Norrogancett River, commonly called Norrogancett Bay, where the said River falleth into the Sea, and on the North by the lyne of the Massachusetts Plantacon, and on the south by the Sea, and in longitude as the lyne of the Massachusetts Colony, runinge from East to West, (that is to say) from the Said Norrogancett Bay on the East to the South Sea on the West parte, with the Islands thervnto adioyneinge, Together with all firme lands ... TO HAVE AND TO HOLD ... for ever....
Dispute with New York
This claim inevitably brought Connecticut into territorial conflict with the colonies that lay between it and the Pacific. A patent issued on March 12, 1664, granted the Duke of York "all the land from the west side of Connecticut River to the east side of Delaware Bay." In October, 1664, Connecticut and New York agreed to grant Long Island to New York, and establish the boundary between Connecticut and New York as a line from the Mamaroneck River "north-northwest to the line of the Massachusetts", crossing the Hudson River near Peekskill and the boundary of Massachusetts near the northwest corner of the current Ulster County, New York. This agreement was never really accepted, however, and boundary disputes continued. The Governor of New York issued arrest warrants for residents of Greenwich, Rye, and Stamford, and founded a settlement north of Tarrytown in what Connecticut considered part of its territory in May 1682. Finally, on November 28, 1683, the two colonies negotiated a new agreement establishing the border as 20 miles (32 km) east of the Hudson River, north to Massachusetts. In recognition of the wishes of the residents, the 61,660 acres (249.5 km2) east of the Byram River making up the Connecticut Panhandle were granted to Connecticut. In exchange, Rye was granted to New York, along with a 1.81-mile (2.91 km) wide strip of land running north from Ridgefield to Massachusetts alongside Dutchess, Putnam, and Westchester Counties, New York, known as the "Oblong".
Dispute with Pennsylvania
In the 1750s, the western frontier remained on the other side of New York. In 1754 the Susquehannah Company of Windham, Connecticut obtained from a group of Native Americans a deed to a tract of land along the Susquehanna River which covered about one-third of present-day Pennsylvania. This venture met with the disapproval not only of Pennsylvania, but also of many in Connecticut, including the Deputy Governor, who opposed Governor Jonathan Trumbull's support for the company, fearing that pressing these claims would endanger the charter of the colony. In 1769, Wilkes-Barre was founded by John Durkee and a group of 240 Connecticut settlers. The British government finally ruled "that no Connecticut settlements could be made until the royal pleasure was known". In 1773 the issue was settled in favor of Connecticut, and Westmoreland, Connecticut, was established as a town and later a county.
Pennsylvania did not accede to the ruling, however, and open warfare broke out between the Pennsylvanian and Connecticut settlers, ending with an attack in July 1778, which killed approximately 150 of the settlers and forced thousands to flee. While the Connecticut settlers periodically attempted to regain their land, they were continually repulsed, until, in December 1782, a commission ruled in favor of Pennsylvania. After complex litigation, in 1786, Connecticut dropped its claims by a deed of cession to Congress, in exchange for freedom from war debt and confirmation of the rights to land further west in present-day Ohio, which became known as the Western Reserve. Pennsylvania granted the individual settlers from Connecticut the titles to their land claims. Although the region had been called Westmoreland County, Connecticut, it has no relationship with the current Westmoreland County, Pennsylvania.
The Western Reserve, which Connecticut received in recompense for giving up all claims to any Pennsylvania land in 1786, constituted a strip of land in what is currently northeast Ohio, 120 miles (190 km) wide from east to west bordering Lake Erie and Pennsylvania. Connecticut owned this territory until selling it to the Connecticut Land Company in 1795 for $1,200,000, which resold parcels of land to settlers. In 1796, the first settlers, led by Moses Cleaveland, began a community which was to become Cleveland, Ohio; in a short time, the area became known as "New Connecticut".
An area 25 miles (40 km) wide at the western end of the Western Reserve, set aside by Connecticut in 1792 to compensate those from Danbury, New Haven, Fairfield, Norwalk, and New London who had suffered heavy losses when they were burnt out by fires set by British raids during the War of Independence, became known as the Firelands. By this time, however, most of those granted the relief by the state were either dead or too old to actually move there. The Firelands now constitutes Erie and Huron Counties, as well as part of Ashland County, Ohio.
The American Revolution (1775–1789)
Connecticut was the only one of the 13 colonies involved in the American Revolution that did not have an internal revolution of its own. It had been largely self-governing since its beginnings. Governor Jonathan Trumbull was elected every year from 1769 to 1784. Connecticut's government continued unchanged even after the revolution, until the United States Constitution was adopted in 1789. A Connecticut privateer was the Guilford, formerly the Loyalist privateer Mars.
Several significant events during the American Revolution occurred in Connecticut. Notably, a British invasion force landed at Westport, Connecticut in 1777, marched to and burnt the city of Danbury, Connecticut for safeguarding Patriot supplies, and was engaged on its return by General David Wooster and General Benedict Arnold in the Battle of Ridgefield, a resistance which deterred future strategic landing attempts by the British for the remainder of the war. The state was also the launching site for a number of raids against Long Island orchestrated by Samuel Holden Parsons and Benjamin Tallmadge, and provided men and material for the war effort, especially to Washington's army outside New York City. General William Tryon raided the Connecticut coast in July 1779, focusing on New Haven, Norwalk, and Fairfield. The French General the Comte de Rochambeau celebrated the first Catholic Mass in Connecticut at Lebanon in summer 1781 while marching through the state from Rhode Island to rendezvous with General George Washington in Dobbs Ferry, New York. New London and Groton Heights were raided in September 1781 by Connecticut native and turncoat Benedict Arnold.
Early National Period (1789–1818)
New England was the stronghold of the Federalist party. One historian explains how well organized it was in Connecticut:
- It was only necessary to perfect the working methods of the organized body of office-holders who made up the nucleus of the party. There were the state officers, the assistants, and a large majority of the Assembly. In every county there was a sheriff with his deputies. All of the state, county, and town judges were potential and generally active workers. Every town had several justices of the peace, school directors and, in Federalist towns, all the town officers who were ready to carry on the party's work. Every parish had a "standing agent," whose anathemas were said to convince at least ten voting deacons. Militia officers, state's attorneys, lawyers, professors and schoolteachers were in the van of this "conscript army." In all, about a thousand or eleven hundred dependent office-holders were described as the inner ring which could always be depended upon for their own and enough more votes within their control to decide an election. This was the Federalist machine.
Given the power of the Federalists, the Democratic-Republicans had to work harder to win. In 1806, the state leadership sent town leaders instructions for the forthcoming elections. Every town manager was told by state leaders "to appoint a district manager in each district or section of his town, obtaining from each an assurance that he will faithfully do his duty." Then, the town manager was instructed to compile lists and total up the number of taxpayers, the number of eligible voters, how many were "decided republicans," "decided federalists," or "doubtful," and finally to count the number of supporters who were not currently eligible to vote but who might qualify (by age or taxes) at the next election. These highly detailed returns were to be sent to the county manager. The county managers, in turn, were to compile county-wide statistics and send them on to the state manager. Using the newly compiled lists of potential voters, the managers were told to get all the eligibles to the town meetings, and help the young men qualify to vote. At the annual official town meeting, the managers were told to "notice what republicans are present, and see that each stays and votes till the whole business is ended. And each District-Manager shall report to the Town-Manager the names of all republicans absent, and the cause of absence, if known to him." Of utmost importance, the managers had to nominate candidates for local elections, and to print and distribute the party ticket. The state manager was responsible for supplying party newspapers to each town for distribution by town and district managers. This highly coordinated "get-out-the-vote" drive would be familiar to modern political campaigners, but was the first of its kind in world history.
Connecticut prospered during the era, as the seaports were busy and the first textile factories were built. The American Embargo and the British blockade during the War of 1812 severely hurt the export business, but did help promote the rapid growth of industry. Eli Whitney of New Haven was one of many engineers and inventors who made the state a world leader in machine tools and industrial technology generally. The state was known for its political conservatism, typified by its Federalist party and the Yale College of Timothy Dwight. The foremost intellectuals were Dwight and Noah Webster, who compiled his great dictionary in New Haven. Religious tensions polarized the state, as the established Congregational Church, in alliance with the Federalists, tried to maintain its grip on power. The failure of the Hartford Convention in 1814 wounded the Federalists, who were finally upended by the Republicans in 1817.
Modernization and industry
Up until this time, Connecticut had adhered to the 1662 Charter, and with the independence of the American colonies over forty years prior, much of what the Charter stood for was no longer relevant. In 1818, a new constitution was adopted that was the first piece of written legislation to separate church and state in Connecticut and to give equality to all religions. Gubernatorial powers were also expanded, and the courts gained greater independence through judges being allowed to serve life terms.
Connecticut started off with the raw materials of abundant running water and navigable waterways, and, drawing on the Yankee work ethic, quickly became an industrial leader. Between the birth of the U.S. patent system in 1790 and 1930, Connecticut had more patents issued per capita than any other state; in the 1800s, when the U.S. as a whole was issued one patent per three thousand population, Connecticut inventors were issued one patent for every 700–1000 residents. Connecticut's first recorded invention was a lapidary machine, by Abel Buell of Killingworth, in 1765.
Civil War era
As a result of the industrialization of the state and New England as a region, Connecticut manufacturers played a prominent role in supplying the Union Army and Navy with weapons, ammunition, and military materiel during the Civil War. A number of Connecticut residents were generals in the Federal service and Gideon Welles was the United States Secretary of the Navy and a confidant of President Abraham Lincoln.
Starting in the 1830s, and accelerating when Connecticut abolished slavery entirely in 1848, African Americans from in- and out-of-state began relocating to urban centers for employment and opportunity, forming new neighborhoods such as Bridgeport's Little Liberia.
Twentieth century
Connecticut factories in Bridgeport, New Haven, Waterbury and Hartford were magnets for European immigrants. The largest groups were Italian Americans, Polish Americans, and other Eastern Europeans. They brought much needed unskilled labor and Catholicism to a historically Protestant state. A significant number of Jewish immigrants also arrived in this period due to an 1843 change in the law. Connecticut's population was almost 30% immigrant by 1910.
Not everyone welcomed the new immigrants and the change in the state's ethnic and religious makeup. The Ku Klux Klan had a following among some in Connecticut after it was reorganized in Georgia in 1915. It preached a doctrine of Protestant control of America and wanted to keep down blacks, Jews and Catholics. The Klan enjoyed only a brief period of popularity in the state, but it had a peak of 15,000 members in 1925. The group was most active in New Haven, New Britain and Stamford, which all had large Catholic populations. By 1926, the Klan leadership was divided, and it lost strength, although it continued to maintain small, local branches for years afterward in Stamford, Bridgeport, Darien, Greenwich and Norwalk. The Klan has since disappeared from the state.
Depression and War Years
With rising unemployment in both urban and rural areas, Connecticut Democrats saw their chance to return to power. The hero of the movement was Governor Wilbur Lucius Cross (1931–1939), a Yale English professor, who emulated much of Franklin D. Roosevelt's New Deal by creating new public services and instituting a minimum wage. The Merritt Parkway was constructed in this period.
However, in 1938, the Democratic Party was wracked by controversy, which quickly allowed the Republicans to gain control once again, with Governor Raymond E. Baldwin. Connecticut became a highly competitive, two-party state.
The lingering Depression soon gave way to unparalleled opportunity with the United States involvement in World War II (1941–1945). Roosevelt's call for America to be the Arsenal of Democracy led to remarkable growth in munition-related industries, such as airplane engines, radio, radar, proximity fuzes, rifles, and a thousand other products. Pratt and Whitney made airplane engines, Cheney sewed silk parachutes, and Electric Boat built submarines. This was coupled with traditional manufacturing including guns, ships, uniforms, munitions, and artillery. Connecticut manufactured 4.1 percent of total United States military armaments produced during World War II, ranking ninth among the 48 states. Ken Burns focused on Waterbury's munitions production in his 2007 miniseries The War. Although most munitions production ended in 1945, high tech electronics and airplane parts continued.
Cold War Years
In the Cold War years, Connecticut's suburbs thrived while its cities struggled. Connecticut built the first nuclear-powered submarine, the USS Nautilus (SSN-571) and other essential weapons for The Pentagon. The increased job market gave the state the highest per capita income at the beginning of the 1960s. The increased standard of living could be seen in the various suburban neighborhoods that began to develop outside major cities. Construction of major highways such as the Connecticut Turnpike caused former small towns to become locations for large-scale development, a trend that continues to this day.
However, all of these developments also led to the economic downfall of many of Connecticut's cities, many of which remain dotted with abandoned mills and other broken-down buildings. During this time, Connecticut's cities saw major growth in the African American and Latino populations. African Americans and Latinos inherited urban spaces that were no longer a high priority for the state or private industry, and by the 1980s crime and urban blight were major issues. In fact, the poor conditions that many inhabited were cause for militant movements that pushed for the gentrification of ghettos and the desegregation of the school system. In 1987, Hartford became the first American city to elect an African-American woman as mayor, Carrie Saxon Perry.
Connecticut business thrived until the end of the 1980s, with many well-known corporations moving to Fairfield County, including General Electric, American Brands, and Union Carbide. The state also benefited from the defense buildup initiated by Ronald Reagan, due to such major employers as Electric Boat shipyards, Sikorsky helicopters, and Pratt & Whitney jet engines.
The late 20th century
Connecticut's dependence on the defense industry posed an economic challenge at the end of the Cold War. The resulting budget crisis helped elect Lowell Weicker as Governor on a third party ticket in 1990. Weicker's remedy, a state income tax, proved effective in balancing the budget but politically unpopular, as Weicker retired after a single term.
With newly "reconquered" land, the Pequots initiated plans for the construction of a multi-million dollar casino complex to be built on reservation land. The Foxwoods Casino was completed in 1992 and the enormous revenue it received made the Mashantucket Pequot Reservation one of the wealthiest in the country. With the newfound money, great educational and cultural initiatives were carried out, including the construction of the Mashantucket Pequot Museum and Research Center. The Mohegan Reservation gained political recognition shortly thereafter and, in 1994, opened another successful casino (Mohegan Sun) near the town of Uncasville. The success of casino gambling helped shift the state's economy away from manufacturing to entertainment, such as ESPN, financial services, including hedge funds and pharmaceutical firms such as Pfizer.
21st century
In the terrorist attacks of September 11, 2001, 65 state residents were killed. The vast majority were Fairfield County residents who were working in the World Trade Center. Greenwich lost 12 residents, Stamford and Norwalk each lost nine and Darien lost six. A state memorial was later set up at Sherwood Island State Park in Westport. The New York City skyline can be seen from the park.
A number of political scandals rocked Connecticut in the early 21st century. These included the 2003 removal from office of the mayors of Bridgeport and Waterbury: Joseph P. Ganim of Bridgeport was convicted on 16 corruption charges, while Philip A. Giordano of Waterbury was charged with 18 counts of sexual abuse of two girls.
In 2004, Governor John G. Rowland resigned during a corruption investigation. Rowland later pleaded guilty to federal charges, and his successor, M. Jodi Rell, focused her administration on reforms in the wake of the Rowland scandal.
In April 2005, Connecticut passed a law which grants all rights of marriage to same-sex couples. However, the law required that such unions be called "civil unions", and that the title of marriage be limited to those unions whose parties are of the opposite sex. The state was the first to pass a law permitting civil unions without a prior court proceeding. In October 2008, the Supreme Court of Connecticut ordered same-sex marriage legalized.
The state's criminal justice system also dealt with the first execution in the state since 1960, the 2005 execution of serial killer Michael Ross, and was rocked by the July 2007 home invasion murders in Cheshire. Because the accused perpetrators of the Petit murders were out on parole, Governor M. Jodi Rell promised a full investigation into the state's criminal justice policies.
On April 11, 2012, the State House of Representatives voted to end the state's rarely enforced death penalty; the State Senate had previously passed the measure on April 5. Governor Dannel Malloy announced that "when it gets to my desk I will sign it". Eleven inmates were on death row at that time, including the two men convicted of the July 2007 Cheshire, Connecticut, home invasion murders. The repeal remained controversial, both because the legislation is not retroactive and does not commute existing sentences, and because a majority of the state's citizens (62%) favored retaining the death penalty.
On December 14, 2012, Adam Lanza shot and killed 26 people, including 20 children and 6 staff, at Sandy Hook Elementary School in the Sandy Hook village of Newtown, Connecticut, and then killed himself. It was the second-deadliest mass shooting in U.S. history, after the 2007 Virginia Tech massacre.
For the first time in human history, atmospheric carbon dioxide levels passed 400 parts per million (ppm) at the historic Mauna Loa Observatory in Hawaii. This is the same location where Scripps Institution of Oceanography researcher Charles David Keeling first established the "Keeling Curve," a famous graph showing that carbon dioxide concentrations are increasing rapidly in the atmosphere. CO2 was around 280 ppm before the Industrial Revolution, when humans first began releasing large amounts of CO2 to the atmosphere by burning fossil fuels. On May 9, the reading was a startling 400.08 ppm for a 24-hour period. But without the help of the oceans, this number would already be much higher.
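As a rough back-of-envelope illustration of the figures quoted above (this calculation is not part of the original Scripps or NOAA analysis), the rise from about 280 ppm before the Industrial Revolution to the 400.08 ppm reading can be expressed as a relative increase and an implied average growth rate. The short Python sketch below assumes, for illustration only, that the industrial era began around 1750.

    # Back-of-envelope look at the CO2 figures quoted above (illustrative only).
    pre_industrial_ppm = 280.0   # approximate pre-industrial concentration
    current_ppm = 400.08         # 24-hour reading reported for May 9
    years_elapsed = 2013 - 1750  # assumed span of the industrial era

    relative_increase = (current_ppm - pre_industrial_ppm) / pre_industrial_ppm
    average_rise_per_year = (current_ppm - pre_industrial_ppm) / years_elapsed

    print(f"Relative increase: {relative_increase:.1%}")          # roughly 43%
    print(f"Average rise: {average_rise_per_year:.2f} ppm/year")  # about 0.46 ppm/year

That long-run average understates the pace of recent decades, which is exactly what the Keeling Curve makes visible: the rate of increase has accelerated over the decades of the Mauna Loa record.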
The oceans are heating up, and marine ecosystems are changing because of it. Long before climate scientists realized the extent of impacts from carbon dioxide emissions, ocean scientists were taking simple temperature readings. Now those readings are off the charts, showing an ocean thrown out of balance by human-caused climate change. Sea surface temperatures hit a 150-year high off the U.S. East Coast, from Maine to North Carolina, during 2012.
These abnormally high temperatures are fundamentally altering marine ecosystems, from the abundance of plankton to the movement of fish and whales. Many marine species have specific time periods for spawning, migration, and birthing based on temperature signals and availability of prey. Kevin Friedland, a scientist in NOAA’s Northeast Fisheries Science Center’s Ecosystem Assessment Program, said “Changes in ocean temperatures and the timing and strength of spring and fall plankton blooms could affect the biological clocks of many marine species, which spawn at specific times of the year based on environmental cues like water temperature.”
Last summer I had the amazing opportunity to be on board the U.S. Coast Guard icebreaker Healy, in partnership with NASA's ICESCAPE mission to study the effects of ocean acidification on phytoplankton communities in the Arctic Ocean. We collected thousands of water samples and ice cores in the Chukchi and Beaufort Seas.
While in the northern reaches of the Chukchi Sea, we discovered large “blooms” of phytoplankton under the ice. It had previously been assumed that sea ice blocked the sunlight necessary for the growth of marine plants. But the ice acts like a greenhouse roof and magnifies the light under the ice, creating a perfect breeding ground for the microscopic creatures. Phytoplankton play an important role in the ocean, without which our world would be drastically different.
Phytoplankton take CO2 out of the water and release oxygen, almost as much as terrestrial plants do. The ecological consequences of the bloom are not yet fully understood, but because they are the base of the entire food chain in the oceans, this was a monumental discovery that will shape our understanding of the Arctic ecosystem in the coming years.
The Arctic is one of the last truly wild places on our planet, where walruses, polar bears, and seals outnumber humans; they raised their heads in wonderment as we walked along the ice and trespassed into their domain. However, their undeveloped home is currently in grave danger. The sea ice that they depend on is rapidly disappearing as the Arctic is dramatically altered by global warming.
Some predictions are as grave as a seasonally ice-free Arctic by 2050. Drilling for oil in the Arctic presents its own host of problems, most dangerous of which is that there is no proven way to clean up spilled oil in icy conditions. An oil spill in the Arctic could be devastating to the phytoplankton and thereby disrupt the entire ecosystem. The full effects of such a catastrophe cannot be fully evaluated without better information about the ocean, and we should not be so hasty to drill until we have that basic understanding.
Unless we take drastic action to curb our emissions of CO2 and prevent drilling in the absence of basic science and preparedness, we may see not only an ice-free Arctic in our lifetimes, but also an Arctic ecosystem that is drastically altered.
Editor's note: This is a guest contribution by Oceana supporter Lauren Linzer, who lives on the Spanish island of Lanzarote, one of the Canary Islands, which are just off the west coast of Africa.
Along with many other nations around the world, Spain has been desperately searching for solutions to relieve the increasing financial woes the country is facing.
With a significant portion of its oil supply being imported and oil prices skyrocketing, attention to cutting down on this lofty expense has turned toward a tempting opportunity to drill for oil offshore in their own territory.
The large Spanish petrol firm, REPSOL, has declared an interest in surveying underwater land dangerously close to the Spanish Canary Islands of Lanzarote and Fuerteventura. This would, in theory, cut down significantly on spending for the struggling country, providing a desperately needed financial boost.
But are the grave ecological repercussions worth the investment? There is much debate around the world about this controversial subject; but on the island of Lanzarote, it is clear that this will not be a welcome move.
Last week, protesters from around the island gathered in the capital city of Arrecife to demonstrate their opposition to the exploration for underwater oil. With their faces painted black and picket signs in hand, an estimated 22,000 people (almost one fifth of the island's population) walked from one side of the city to the other, chanting passionately and marching to the beat of drums that led the pack. Late into the night, locals of all ages and occupations joined together to express their dire concerns.
Besides the massive eyesore that the drilling site will introduce off the east coast, the ripple effects to islanders will have a devastating impact. The most obvious industry that will take a serious hit will be tourism, which the island depends on heavily. Most of the large tourist destinations are on the eastern shore due to the year-round excellent weather and plethora of picturesque beaches. But with the introduction of REPSOL's towers a mere 23 kilometers (14 miles) from the island's most populated beaches, the natural purity and ambient tranquility that draw so many European travelers will be a thing of the past.
Editor's note: This post by Oceana CEO Andy Sharpless was originally posted last May on Politico.com. We think it couldn't be more relevant right now, especially considering that many media outlets are now making similar arguments to the one we've been making since last year - that gas prices aren't tied to offshore drilling.
Why do we take terrible risks to drill for oil in the Gulf of Mexico and elsewhere along our coasts?
Most people would say we drill to protect ourselves from big fluctuations in gasoline prices that are caused by major upheavals in the Middle East.
Their argument is that the more oil we can produce domestically, the lower the price we'll pay at the pump. It's not that they like the sight of oil wells off our beaches. The main reason they argue for more offshore oil drilling is they think it will save money, especially since gas prices approached $4 a gallon recently.
Andy Sharpless is the CEO of Oceana.
I have a dramatic update for you on our campaign to stop offshore drilling in Belize.
As I reported to you several weeks ago, the government shockingly rejected 8,000 of the 20,000 signatures we collected against offshore drilling, citing poor penmanship as a primary reason.
The 20,000 signatures we collected should have been more than plenty to trigger a national referendum on offshore drilling, but since the government refused to comply, we held our own referendum last week – a people’s referendum.
And the results were astounding.
Nearly 30,000 registered Belizeans – that’s almost 20% of the country’s voting population – cast a ballot on the issue of offshore drilling. The results? 96% to 4% voted against offshore drilling. We think this is irrefutable evidence that the Belizean government needs to act responsibly, and either end plans to allow drilling in its reef, or allow a public referendum to determine the national policy.
Oceana is the leading voice in Belize against offshore drilling. Belize is home to the magnificent Belize Barrier Reef, a UNESCO World Heritage Site, which we simply cannot sacrifice for oil.
I’ll keep you posted as this important story continues to unfold.
Andy Sharpless is the CEO of Oceana.
If you watched this week’s State of the Union address, you may have heard President Obama announce that he was opening 75 percent of our “potential offshore oil and gas resources.”
The good news is that this isn't news; it's simply a reiteration of the administration's current five-year drilling plan that fully protects the Atlantic and Pacific coasts, as well as much of the U.S. Arctic. The bad news, however, is that the plan expands offshore drilling to include much more of the Gulf of Mexico than ever before, and worse yet, some of the Arctic. It's as if the massive 2010 spill never happened.
In other good news, the President expressed his wish to reduce subsidies for oil companies. The oil companies receive about $10 billion a year in tax breaks, and the Obama administration has proposed cutting $4 billion.
I applaud the President’s commitment to reducing subsidies for the big oil companies, although I wish he would go further and eliminate them completely.
Unfortunately, the State of the Union address, as well as this week’s Republican primary debate in Florida, reiterated that our political leaders still fail to grasp a basic economic fact: that increasing our domestic supply of oil will not lower our prices at the gas pump.
Oil is a global commodity, and prices are set on a world market. Multinational companies that drill for oil, like Shell, BP, and Exxon, will sell to the highest bidder. That may be the U.S. It may just as well be India or China.
As we learned during the 2010 Gulf of Mexico oil disaster, there’s more at stake. National Journal writer Beth Reinhard asked the right question at Monday’s Republican debate when she noted drilling in Florida will create at most 5,000 jobs, while an oil spill threatens the 1 million jobs that depend upon tourism, which contributes $40 billion each year to Florida’s economy.
That’s a high price to pay to help oil companies continue to make record profits. And yet Rick Santorum, on the receiving end of her question, reiterated his support for more domestic drilling.
Unfortunately, oil companies are powerful players in the election season. They dole out enormous contributions to the candidates, which may explain why we see misinformation on both sides of the political aisle.
Here at Oceana, we’ll stick to the facts. More offshore drilling won’t lower your price at the pump, and we’ll continue to fight to protect our beaches and seafood from dirty and dangerous drilling.
After the Gulf oil spill happened, people demanded numbers. They wanted to know animal mortality numbers and dollar signs to understand the worst environmental disaster in our nation’s history.
The problem is that the extent of this spill was so huge and so many animals and people were affected that it’s hard to quantify. But some recent numbers help show how widespread the impacts have been.
So far BP has set aside $20 billion for spill impacts, and it has just been released that they paid out $5 billion of that amount in damages to over 200,000 people in the last year, with an additional $1.5 billion going to cleanup and restoration.
Many more people are claiming damages, with a total of close to 1 million claims being processed from people in all 50 states and 36 different nations, with thousands more claims coming in each week.
How could a spill in the Gulf possibly affect over a million people in such far reaching places? The answer is that the Gulf of Mexico isn’t just an oil and gas depot, it is used for many activities besides drilling that employ thousands of people in fishing and tourism related jobs. As a result, the economic impacts of the spill have been felt around the world.
This is the fifth in a series of posts about this year’s Ocean Hero finalists.
Maria D’Orsogna is a physics and math professor in California, but in her spare time, she has been fighting offshore drilling in Italy, where she spent 10 years of her childhood. She has even earned the nickname “Erin Brockovich of Abruzzo” for her efforts to rally the public and officials to end drilling in the region.
Abruzzo, which may be familiar to you from Montepulciano d’Abruzzo wine (a bottle of which I have sitting at home), is a primarily agricultural region east of Rome. The Adriatic Sea is nearby, along with a marine reserve (Torre del Cerrano), a Coastal National Park (Parco Nazionale della Costa Teatina) and several regional reserves, such as Punta Aderci, where dolphins are often spotted.
Maria’s activism started in 2007, when she discovered that the oil company ENI planned to drill in the coastal town of Ortona, Abruzzo. The company would uproot century-old wineries to build a refinery and a 7km pipeline to the sea.
Maria reports that there was very little information about the industry’s drilling plans, nor analysis on what it could mean for the region’s agriculture or fishing industries. At the time, Italy had no laws regulating offshore drilling.
While fighting the onshore refinery, which was ultimately defeated, Maria said via e-mail, “the attack on the sea began. I had to get involved.”
Oceana was joined by longtime supporters Kate Walsh ("Private Practice" and "Grey's Anatomy") and Aaron Peirsol (gold medal-winning swimmer) in Washington, D.C. today to remember the one-year anniversary of the Gulf of Mexico oil spill. We were also joined by Patty Whitney, a Louisiana resident-turned-activist whose home was affected by last year's disaster.
Along with campaign director Jackie Savitz, and a slew of energetic volunteers, the group served to remind us that offshore drilling is never safe - and that an oil spill could happen anywhere. Check out this slideshow of images from today's event.
Rutherford's Gold Foil Experiment
Rutherford started his scientific career with much success in local schools, leading to a scholarship to Nelson College. After achieving more academic honors at Nelson College, Rutherford moved on to Cambridge University's Cavendish Laboratory. There his mentor, J. J. Thomson, convinced him to study radiation. By 1898 Rutherford was ready to earn a living and sought a job. With Thomson's recommendation, McGill University in Montreal accepted him as a professor. After performing many experiments and making new discoveries at McGill University, Rutherford was awarded the Nobel Prize in Chemistry. In 1907 he succeeded Arthur Schuster at the University of Manchester. He began pursuing alpha particles in 1908. With the help of Geiger he found the number of alpha particles emitted per second by a gram of radium. He was also able to confirm that alpha particles cause a faint but discrete flash when striking a luminescent zinc sulfide screen. These accomplishments are all overshadowed by Rutherford's famous Gold Foil experiment, which revolutionized the atomic model.
This experiment was Rutherford's most notable achievement. It not only disproved Thomson's atomic model but also paved the way for such developments as the atomic bomb and nuclear power. The atomic model he proposed based on the findings of his Gold Foil experiment has yet to be disproven. The following paragraphs will explain the significance of the Gold Foil Experiment as well as how the experiment contradicted Thomson's atomic model.
Rutherford began his experiment with the philosophy of trying "any damn fool experiment" on the chance it might work.1 With this in mind he set out to test the current atomic model. In 1909 he and his partner, Geiger, decided that Ernest Marsden, a student at the University of Manchester, was ready for a real research project.2 The experiment's apparatus consisted of polonium in a lead box emitting alpha particles towards a gold foil. The foil was surrounded by a luminescent zinc sulfide screen to detect where the alpha particles went after striking the gold atoms. Under Thomson's atomic model the experiment did not seem worthwhile, for it predicted that all the alpha particles would pass straight through the foil. However unlikely it may have seemed for the alpha particles to bounce off the gold atoms, some of them did, leading Rutherford to say, "It was almost as incredible as if you fired a fifteen-inch shell at a piece of tissue paper and it came back and hit you." Soon he came up with a new atomic model based on the results of this experiment. Nevertheless, his findings and the new atomic model were largely ignored by the scientific community at the time.
In spite of the views of other scientists, Rutherford's 1911 atomic model was backed by the scientific proof of his Gold Foil Experiment. When he approached the experiment he respected and agreed with the atomic theory of J. J. Thomson, his friend and mentor. This theory proposed that the electrons were evenly distributed throughout an atom. Since an alpha particle is roughly 8,000 times as heavy as an electron, one electron could not deflect an alpha particle at an obtuse angle. Applying Thomson's model, a passing particle could not hit more than one electron at a time; therefore, all of the alpha particles should have passed straight through the gold foil. This was not the case - a notable few alpha particles reflected off the gold atoms back towards the polonium. Hence the mass of an atom must be condensed in a concentrated core; otherwise the mass of the alpha particles would be greater than that of any part of an atom they hit. As Rutherford put it:
"The alpha projectile changed course in a single encounter with a target atom. But for this to occur, the forces of electrical repulsion had to be concentrated in a region of 10^-13 cm, whereas the atom was known to ..."
He went on to say that this meant most of the atom was empty space with a small dense core. Rutherford pondered for much time before announcing in 1911 that he had made a new atomic model: this one with a condensed core (which he named the "nucleus") and electrons orbiting this core. As stated earlier, this new atomic model was not opposed but originally ignored by most of the scientific community.
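A rough kinematic estimate, not part of the original essay, helps make the mass argument concrete: in an elastic collision, a heavy projectile striking a much lighter stationary target can be deflected through an angle of at most about the ratio of the two masses (in radians). The Python sketch below uses standard rest masses to show how small that limit is for an alpha particle scattering off a single electron, and why even many such encounters cannot produce the large-angle scattering Geiger and Marsden observed.

    import math

    # Kinematic limit: a projectile of mass M hitting a lighter stationary target
    # of mass m can be deflected by at most theta_max = arcsin(m / M).
    # Masses below are rest masses in MeV/c^2 (illustrative estimate only).
    m_electron = 0.511      # electron
    m_alpha = 3727.4        # alpha particle (roughly 7,300 electron masses)

    theta_max = math.degrees(math.asin(m_electron / m_alpha))
    print(f"Max deflection from one electron: {theta_max:.4f} degrees")  # ~0.008

    # Random, uncorrelated kicks only accumulate like sqrt(N), so even an assumed
    # 10,000 encounters while crossing the foil stay far below 90 degrees.
    n_encounters = 10_000
    print(f"Random-walk estimate: {theta_max * math.sqrt(n_encounters):.2f} degrees")

On Thomson's model of diffuse charge this is essentially the whole story, which is why the observed backscattering pointed so strongly to a tiny, dense, charged nucleus.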
Rutherford's experiment shows how scientists must never simply accept the current theories and models; rather, these must constantly be put to new tests and experiments. Rutherford was truly one of the most successful scientists of his time, and yet his most renowned experiment was done expecting no profound results. Currently, chemists are still realizing the uses for atomic energy thanks to early findings from scientists such as Rutherford.
United States Constitution (n.): the constitution written at the Constitutional Convention in Philadelphia in 1787 and subsequently ratified by the original thirteen states.
Key facts: created September 17, 1787; ratified June 21, 1788; signed by 39 of the 55 delegates; purpose, to replace the Articles of Confederation (1777). The infobox image shows page one of the original copy of the Constitution.
The Constitution of the United States is the supreme law of the United States of America. The first three Articles of the Constitution establish the rules and separate powers of the three branches of the federal government: a legislature, the bicameral Congress; an executive branch led by the President; and a federal judiciary headed by the Supreme Court. The last four Articles frame the principle of federalism. The Tenth Amendment confirms its federal characteristics.
The Constitution was adopted on September 17, 1787, by the Constitutional Convention in Philadelphia, Pennsylvania, and ratified by conventions in eleven states. It went into effect on March 4, 1789. The first ten amendments are known as the Bill of Rights. The Constitution has been amended seventeen times (for a total of 27 amendments) and its principles are applied in courts of law by judicial review.
The Constitution guides American society in law and political culture. It is the oldest charter of supreme law in continuous use, and it influenced later international figures establishing national constitutions. Recent impulses for reform center on concerns for extending democracy and balancing the Federal budget.
The Articles of Confederation and Perpetual Union were the first constitution of the United States of America. The problem with the United States government under the Articles of Confederation was, in the words of George Washington, "no money".
Congress could print money, but by 1786 the money was useless. Congress could borrow money, but could not pay it back. No state paid all of its U.S. taxes; Georgia paid nothing. A few paid an amount equal to the interest on the national debt owed to their citizens, but no more. No interest was paid on debt owed to foreign governments. By 1786, the United States was on course to default as the principal came due.
The United States could not defend itself as an independent nation in the world of 1787. Most of the U.S. troops in the 625-man U.S. Army were deployed facing British forts on American soil. The troops had not been paid; some were deserting and the remainder threatened mutiny. Spain closed New Orleans to American commerce. The United States protested, to no effect. The Barbary Pirates began seizing American commercial ships. The Treasury had no funds to pay the pirates' extortion demands. Congress had no more credit if another military crisis required action.
The states were proving inadequate to the requirements of sovereignty in a confederation. Although the Treaty of Paris (1783) had been made between Great Britain and the United States with each state named individually, individual states violated it. New York and South Carolina repeatedly prosecuted Loyalists for wartime activity and redistributed their lands over the protests of both Great Britain and the Articles Congress.
In Massachusetts during Shays' Rebellion, Congress had no money to support a constituent state, nor could Massachusetts pay for its own internal defense. General Benjamin Lincoln had to raise funds among Boston merchants to pay for a volunteer army. During the upcoming Convention, James Madison angrily questioned whether the Articles of Confederation was a compact or even government. Connecticut paid nothing and "positively refused" to pay U.S. assessments for two years. A rumor had it that a "seditious party" of New York legislators had opened communication with the Viceroy of Canada. To the south, the British were said to be funding the Creek Indian raids; Savannah was fortified, the State of Georgia under martial law.
Congress was paralyzed. It could do nothing significant without nine states, and some legislative business required all thirteen. When only one member of a state was on the floor, then that state’s vote did not count. If a delegation were evenly divided, no vote counted towards the nine-count requirement. Individual state legislatures independently laid embargoes, negotiated directly with foreigners, raised armies and made war, all violating the letter and the spirit of the “Articles of Confederation and Perpetual Union”. The Articles Congress had "virtually ceased trying to govern." The vision of a "respectable nation" among nations seemed to be fading in the eyes of revolutionaries such as George Washington, Benjamin Franklin and Rufus King. The dream of a republic, a nation without hereditary rulers, with power derived from the people in frequent elections, was in doubt.
On February 21, 1787, the Articles Congress called a convention of state delegates at Philadelphia to propose a plan of government. Unlike earlier attempts, the convention was not meant for new laws or piecemeal alterations, but for the “sole and express purpose of revising the Articles of Confederation”. The convention was not limited to commerce; rather, it was intended to “render the federal constitution adequate to the exigencies of government and the preservation of the Union." The proposal might take effect when approved by Congress and the states.
On the appointed day, May 14, only the Virginia and Pennsylvania delegations were present. A quorum of seven states met on May 25. Eventually twelve states were represented; 74 delegates were named, 55 attended and 39 signed. The delegates arrived with backgrounds in local and state government and Congress. They were judges and merchants, war veterans and revolutionary patriots, native-born and immigrant, establishment easterners and westward-looking adventurers. The participating delegates are honored as the Constitution’s “Framers”.
The Constitutional Convention began deliberations on May 25, 1787. The delegates were generally convinced that an effective central government with a wide range of enforceable powers must replace the weaker Congress established by the Articles of Confederation. The high quality of the delegates to the convention was remarkable. As Thomas Jefferson in Paris wrote to John Adams in London, "It really is an assembly of demigods."
Delegates used two streams of intellectual tradition, and any one delegate could be found using both or a mixture depending on the subject under discussion, foreign affairs or the economy, national government or federal relationships among the states. The Virginia Plan recommended a consolidated national government, generally favoring the big population states. It used the philosophy of John Locke to rely on consent of the governed, Montesquieu for divided government, and Edward Coke emphasizing equity in outcomes. The New Jersey Plan generally favored the small population states, using the philosophy of English Whigs such as Edmund Burke to rely on received procedure, and William Blackstone emphasizing sovereignty of the legislature.
The Convention devolved into a “Committee of the Whole” to consider the fifteen propositions of the Virginia Plan in their numerical order. These discussions continued until June 13, when the Virginia resolutions in amended form were reported out of committee.
All agreed to a republican form of government grounded in representing the people in the states. For the legislature, two issues were to be decided, (1) how the votes were to be allocated among the states in the Congress, and (2) how the representatives should be elected. The question was settled by the Connecticut Compromise or "Great Compromise". In the House, state power was to be based on population and the people would vote. In the Senate, state power was to be based on state legislature election, two Senators generally to be elected by different state legislatures to better reflect the long term interests of the people living in each state.
The Great Compromise ended the stalemate between "patriots" and "nationalists", leading to numerous other compromises in a spirit of accommodation. There were sectional interests to be balanced by the three-fifths compromise; reconciliation on the Presidential term, powers, and method of selection; and the jurisdiction of the federal judiciary. Debates on the Virginia resolutions continued. The 15 original resolutions had been expanded into 23.
On July 24, a committee of five (John Rutledge (SC), Edmund Randolph (VA), Nathaniel Gorham (MA), Oliver Ellsworth (CT), and James Wilson (PA)) was elected to draft a detailed constitution. The Convention adjourned from July 26 to August 6 to await the report of this "Committee of Detail". Overall, the report of the committee conformed to the resolutions adopted by the Convention, adding some elements.
From August 6 to September 10, the report of the committee of detail was discussed, section-by-section and clause-by-clause. Details were attended to, and further compromises were effected. Toward the close of these discussions, on September 8, another committee of five (William Samuel Johnson (CT), Alexander Hamilton (NY), Gouverneur Morris (PA), James Madison (VA), and Rufus King (MA)) was appointed "to revise the style of and arrange the articles which had been agreed to by the house." On Wednesday, September 12, the report of the "committee of style" was ordered printed for the convenience of the delegates. For three days, the Convention compared this report with the proceedings of the Convention. The Constitution was then ordered engrossed on Saturday, September 15, by Jacob Shallus. The Convention met on Monday, September 17, for its final session. Several of the delegates were disappointed in the result, seeing it as a makeshift series of unfortunate compromises. Some delegates left before the ceremony, and three who remained refused to sign. Of the thirty-nine signers, Benjamin Franklin summed up, addressing the Convention: "There are several parts of this Constitution which I do not at present approve, but I am not sure I shall never approve them." He would accept the Constitution, "because I expect no better and because I am not sure that it is not the best."
The advocates of the Constitution were anxious to obtain the unanimous support of all twelve states represented in the Convention. Their agreed formula was, "Done in Convention, by the unanimous consent of the States present." George Washington noted in his diary that night that the proposal was agreed to by eleven state delegations and the lone Mr. Hamilton for New York. Transmitted to the Articles Congress then sitting in New York City, the Constitution was forwarded to the states by Congress with a recommendation to follow the ratification process outlined in the Constitution. Each state legislature was to call elections for a "Federal Convention" to ratify the Constitution, and the states expanded the franchise beyond the constitutional requirement to more nearly embrace "the people". Eleven of the thirteen states ratified at first, and all thirteen had done so a year later. The Articles Congress certified that eleven states had ratified, enough to begin the new government, and called on the states to hold elections to begin operation. It then dissolved itself on March 4, 1789, the day the first session of the First Congress began; George Washington was inaugurated as President two months later.
It was within the power of the old congress to expedite or block the ratification of the new Constitution. The document that the Philadelphia Convention presented was technically only a revision of the Articles of Confederation. But the last article of the new instrument provided that when ratified by conventions in nine states (or 2/3 at the time), it should go into effect among the States so acting.
Then followed an arduous process of ratification of the Constitution by specially constituted conventions. The need for only nine states was a controversial decision at the time, since the Articles of Confederation could only be amended by unanimous vote of all the states. However, the new Constitution was ratified by all thirteen states, with Rhode Island signing on last in May 1790.
Three members of the Convention – Madison, Gorham, and King – were also Members of Congress. They proceeded at once to New York, where Congress was in session, to placate the expected opposition. Aware of their vanishing authority, Congress, on September 28, after some debate, unanimously decided to submit the Constitution to the States for action. It made no recommendation for or against adoption.
Two parties soon developed: the Anti-Federalists in opposition and the Federalists in support of the Constitution, and the Constitution was debated, criticized, and expounded clause by clause. Hamilton, Madison, and Jay, under the name of "Publius", wrote a series of commentaries, now known as the Federalist Papers, in support of the new instrument of government; however, the primary aim of the essays was ratification in the state of New York, at that time a hotbed of anti-federalism. These commentaries on the Constitution, written during the struggle for ratification, have been frequently cited by the Supreme Court as an authoritative contemporary interpretation of the meaning of its provisions. The closeness and bitterness of the struggle over ratification and the conferring of additional powers on the central government can scarcely be exaggerated. In some states, ratification was effected only after a bitter struggle in the state convention itself. In every state, the Federalists proved more united and coordinated action between different states, while the Anti-Federalists were localized and did not attempt to reach out to other states.
The Continental Congress – which still functioned at irregular intervals – passed a resolution on September 13, 1788, to put the new Constitution into operation.
Several ideas in the Constitution were new. These were associated with the combination of consolidated government along with federal relationships with constituent states.
Both the influence of Edward Coke and William Blackstone were evident at the Convention. In his Institutes of the Laws of England, Edward Coke interpreted Magna Carta protections and rights to apply not just to nobles, but to all British subjects. In writing the Virginia Charter of 1606, he enabled the King in Parliament to give those to be born in the colonies all rights and liberties as though they were born in England. William Blackstone's Commentaries on the Laws of England were the most influential books on law in the new republic.
British political philosopher John Locke following the Glorious Revolution was a major influence expanding on the contract theory of government advanced by Thomas Hobbes. Locke advanced the principle of consent of the governed in his Two Treatises of Government. Government's duty under a social contract among the sovereign people was to serve them by protecting their rights. These basic rights were life, liberty and property.
Montesquieu emphasized the need for balanced forces pushing against each other to prevent tyranny (reflecting the influence of Polybius's 2nd-century BC treatise on the checks and balances of the Roman Republic). In his The Spirit of the Laws, Montesquieu argues that the separation of state powers should be by its service to the people's liberty: legislative, executive and judicial.
Division of power in a republic was informed by the British experience with mixed government, as well as study of republics ancient and modern. A substantial body of thought had been developed from the literature of republicanism in the United States, including work by John Adams and applied to the creation of state constitutions.
The Iroquois nations' political confederacy and democratic government under the Great Law of Peace have been credited as influences on the Articles of Confederation and the United States Constitution. Relations had long been close, as from the beginning the colonial English needed allies against New France. Prominent figures such as Thomas Jefferson in colonial Virginia and Benjamin Franklin in colonial Pennsylvania, two colonies whose territorial claims extended into Iroquois territory, were involved with leaders of the New York-based Iroquois Confederacy.
In the 1750s at the Albany Congress, Franklin called for "some kind of union" of English colonies to effectively deal with Amerindian tribes. John Rutledge (SC) quoted Iroquoian law to the Constitutional Convention, "We, the people, to form a union, to establish peace, equity, and order..."
The Iroquois experience with confederacy was both a model and a cautionary tale. Their "Grand Council" had no coercive control over the constituent members, and decentralization of authority and power had frequently plagued the Six Nations since the coming of the Europeans. The governance adopted by the Iroquois suffered from "too much democracy" and the long term independence of the Iroquois confederation suffered from intrigues within each Iroquois nation.
The 1787 United States had similar problems, with individual states making separate agreements with European and Amerindian nations apart from the Continental Congress. Without the Convention's proposed central government, the framers feared that the fate of the United States under the Articles of Confederation would be the same as that of the Iroquois Confederacy.
The United States Bill of Rights consists of the ten amendments added to the Constitution in 1791, as supporters of the Constitution had promised critics during the debates of 1788. The English Bill of Rights (1689) was an inspiration for the American Bill of Rights. Both require jury trials, contain a right to keep and bear arms, prohibit excessive bail and forbid "cruel and unusual punishments." Many liberties protected by state constitutions and the Virginia Declaration of Rights were incorporated into the Bill of Rights.
The Constitution consists of a preamble, seven original articles, twenty-seven amendments, and a paragraph certifying its enactment by the constitutional convention.
We the People of the United States, in Order to form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defence, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity, do ordain and establish this Constitution for the United States of America.
The Preamble sets out the origin, scope and purpose of the Constitution. Its origin and authority is in “We, the people of the United States”. This echoes the Declaration of Independence. “One people” dissolved their connection with another, and assumed among the powers of the earth, a sovereign nation-state. The scope of the Constitution is twofold. First, “to form a more perfect Union” than had previously existed in the “perpetual Union” of the Articles of Confederation. Second, to “secure the blessings of liberty”, which were to be enjoyed by not only the first generation, but for all who came after, “our posterity”.
This is an itemized social contract of democratic philosophy. It details how the more perfect union was to be carried out between the national government and the people. The people are to be provided (a) justice, (b) civil peace, (c) common defense, (d) those things of a general welfare that they could not provide themselves, and (e) freedom. A government of "liberty and union, now and forever", unfolds when “We” begin and establish this Constitution.[a]
Article One describes the Congress, the legislative branch of the federal government. Section 1, reads, "All legislative powers herein granted shall be vested in a Congress of the United States, which shall consist of a Senate and House of Representatives."
The article establishes the manner of election and the qualifications of members of each body. Representatives must be at least 25 years old, be a citizen of the United States for seven years, and live in the state they represent. Senators must be at least 30 years old, be a citizen for nine years, and live in the state they represent.
Article I, Section 8 enumerates the legislative powers, which include:
To make all laws which shall be necessary and proper for carrying into execution the foregoing powers, and all other powers vested by this Constitution in the government of the United States, or in any department or officer thereof.
Article I, Section 9 lists eight specific limits on congressional power.
The United States Supreme Court has interpreted the Commerce Clause and the Necessary and Proper Clause in Article One to allow Congress to enact legislation that is neither expressly listed in the enumerated powers nor expressly denied in the limitations on Congress. In McCulloch v. Maryland (1819), the Supreme Court read the Necessary and Proper Clause broadly, holding that Congress may exercise implied powers in carrying into execution "the foregoing powers, and all other powers" vested by the Constitution.
Article II, Section 1 creates the presidency. The section vests the executive power in a President. The President and Vice President serve identical four-year terms. This section originally set the method of electing the President and Vice President, but this method has been superseded by the Twelfth Amendment.
Section 2 grants substantive powers to the president: the President is Commander in Chief of the armed forces, may require the written opinions of the principal officers of the executive departments, and may grant reprieves and pardons for offenses against the United States, except in cases of impeachment.
Section 2 also grants and limits the president's appointment powers: the President makes treaties with the advice and consent of two-thirds of the Senate, and appoints ambassadors, judges of the Supreme Court, and other officers of the United States with the advice and consent of the Senate.
Section 3 opens by describing the president's relations with Congress: the President reports on the state of the union, recommends measures for Congress's consideration, and may convene or adjourn Congress on extraordinary occasions.
Section 3 adds that the President receives ambassadors, takes care that the laws be faithfully executed, and commissions all officers of the United States.
Section 4 provides for removal of the president and other federal officers. The president is removed on impeachment for, and conviction of, treason, bribery, or other high crimes and misdemeanors.
Article Three describes the court system (the judicial branch), including the Supreme Court. There shall be one court called the Supreme Court. The article describes the kinds of cases the court takes as original jurisdiction. Congress can create lower courts and an appeals process. Congress enacts law defining crimes and providing for punishment. Article Three also protects the right to trial by jury in all criminal cases, and defines the crime of treason.
Judicial power. Article III, Section 1 is the authority to interpret and apply the law to a particular case. It includes the power to punish, sentence, and direct future action to resolve conflicts. The Constitution outlines the U.S. judicial system. In the Judiciary Act of 1789 Congress began to fill in details. Currently, Title 28 of the U.S. Code describes judicial powers and administration.
As of the First Congress, the Supreme Court justices rode circuit to sit as panels to hear appeals from the district courts.[b] In 1891 Congress enacted a new system. District courts would have original jurisdiction. Intermediate appellate courts (circuit courts) with exclusive jurisdiction were made up of districts. These circuit courts heard regional appeals before consideration by the Supreme Court. The Supreme Court holds discretionary jurisdiction, meaning that it does not have to hear every case that is brought to it.
To enforce judicial decisions, the Constitution grants federal courts both criminal contempt and civil contempt powers. The court’s summary punishment for contempt immediately overrides all other punishments applicable to the subject party. Other implied powers include injunctive relief and the habeas corpus remedy. The Court may imprison for contumacy, bad-faith litigation, and failure to obey a writ of mandamus. Judicial power includes that granted by Acts of Congress for rules of law and punishment. Judicial power also extends to areas not covered by statute. Generally, federal courts cannot interrupt state court proceedings.
Arisings Clause; the Diversity (of Citizenship) Clause. Article III, Section 2, Clause 1. Citizens of different states are citizens of the United States. Cases arising under the laws of the United States and its treaties come under the jurisdiction of Federal courts. Cases under international maritime law and conflicting land grants of different states come under Federal courts. Cases between U.S. citizens in different states, and cases between U.S. citizens and foreign states and their citizens, come under Federal jurisdiction. Criminal trials are held in the state where the crime was committed.
Judicial review. Article III, Section 2. U.S. courts have the power to rule legislative enactments or executive acts invalid on constitutional grounds. The Constitution is the supreme law of the land. Any court, state or federal, high or low, has the power to refuse to enforce any statute or executive order it deems repugnant to the U.S. Constitution. Two conflicting federal laws are under "pendent" jurisdiction if one presents a strict constitutional issue. Federal court jurisdiction is rare when a state legislature enacts something as under federal jurisdiction.[c] To establish a federal system of national law, considerable effort goes into developing a spirit of comity between the Federal government and the states. By the doctrine of ‘Res Judicata’, federal courts give "full faith and credit" to state courts.[d] The Supreme Court will decide constitutional issues of state law only on a case-by-case basis, and only by strict constitutional necessity, independent of state legislators’ motives, their policy outcomes or their national wisdom.[e]
Exceptions Clause. Article III, Section 2, Clause 2. The Supreme Court has original jurisdiction in cases affecting ambassadors, other public ministers and consuls, and in cases in which a state is a party; in all other cases its jurisdiction is appellate, subject to such exceptions and regulations as Congress shall make.
Standing. Article III, Section 2, Clause 2. This is the rule for Federal courts to take a case. Justiciability is the standing to sue. A case cannot be hypothetical or concerning a settled issue. In the U.S. system, someone must have a direct, real and substantial personal injury. The issue must be concrete and "ripe" for decision, not an abstract question reaching beyond the actual cases before a court, whether Federal or state. Courts following these guidelines exercise judicial restraint. Those making an exception are said to be judicial activists.[f]
Treason. Article III, Section 3. This part of the Constitution strips Congress of the Parliamentary power of changing or modifying the law of treason by simple majority statute. It's not enough to merely think treasonously; there must be an overt act of making war or materially helping those at war with the United States. Accusations must be corroborated by at least two witnesses. Congress is a political body and political disagreements routinely encountered should never be considered as treason. This allows for nonviolent resistance to the government because opposition is not a life or death proposition. However, Congress does provide for other less subversive crimes and punishments such as conspiracy.[g]
Article Four outlines the relations among the states, and between the states and the federal government. In addition, it provides for such matters as admitting new states as well as border changes between the states. For instance, it requires states to give "full faith and credit" to the public acts, records, and court proceedings of the other states. Congress is permitted to regulate the manner in which proof of such acts, records, or proceedings may be admitted. The "privileges and immunities" clause prohibits state governments from discriminating against citizens of other states in favor of resident citizens (e.g., having tougher penalties for residents of Ohio convicted of crimes within Michigan).
It also establishes extradition between the states, as well as laying down a legal basis for freedom of movement and travel amongst the states. Today, this provision is sometimes taken for granted, especially by citizens who live near state borders; but in the days of the Articles of Confederation, crossing state lines was often a much more arduous and costly process. Article Four also provides for the creation and admission of new states. The Territorial Clause gives Congress the power to make rules for disposing of federal property and governing non-state territories of the United States. Finally, the fourth section of Article Four requires the United States to guarantee to each state a republican form of government, and to protect the states from invasion and violence.
Amending clause. Article V, Section 1. Article V provides for amending the supreme "law of the land". Amendment of the state Constitutions at the time of the 1787 Constitutional Convention required only a majority vote in a sitting legislature of a state, as duly elected representatives of its sovereign people. The very next session, meeting by the same authority, could likewise undo the work of any previous sitting assembly. This was not the "fundamental law" the founders such as James Madison had in mind.
Nor did they want to perpetuate the paralysis of the Articles by requiring unanimous state approval. The Articles of Confederation had proven unworkable within ten years of their adoption. Between the two existing options for changing the supreme "law of the land", (a) too easy by the states, and (b) too hard by the Articles, the Constitution offered a federal balance of the national legislature and the states. Two-thirds of both houses of Congress could propose an Amendment, which becomes valid "to all intents and purposes, as part of" the Constitution when three-fourths of the states approve.[h] No Amendment can ever take away equal State votes in the U.S. Senate unless a state first agrees to it. No amendment regarding slavery or direct taxes could be permitted until 1808. Slavery was abolished by the Thirteenth Amendment in December 1865; a direct tax on income was effected by the Sixteenth Amendment in February 1913.
Incorporated Amendments. The Fourteenth Amendment is used by Federal courts to incorporate Amendments into the state constitutions as provisions to protect United States citizens. By 1968, the Court would hold that provisions of the Bill of Rights were "fundamental to the American scheme of justice". The Amendment in view by the Supreme Court was applicable to the states in their relationship to individual United States citizens in every state.
Among the Bill of Rights, Doug Linder counts the First, Second, Fourth, and Sixth Amendments as fully incorporated into State governance. Most of the Fifth Amendment is incorporated, and a single provision of the Eighth. The Third Amendment is incorporated only in the U.S. Second Circuit, covering the states of New York, Connecticut and Vermont. The Supreme Court has not determined that the Constitutional issue is yet "ripe" for national application in every state. The Seventh Amendment is not incorporated. Twentieth Century Amendments use the prohibitive phrase, "neither the United States nor any State" to comprehensively incorporate the Amendment into the States at the time of its ratification into the Constitution.
Article Six establishes the Constitution, and the laws and treaties of the United States made according to it, to be the supreme law of the land, and that "the judges in every state shall be bound thereby, any thing in the laws or constitutions of any state notwithstanding." It validates national debt created under the Articles of Confederation and requires that all federal and state legislators, officers, and judges take oaths or affirmations to support the Constitution. This means that the states' constitutions and laws should not conflict with the laws of the federal constitution and that in case of a conflict, state judges are legally bound to honor the federal laws and constitution over those of any state.
Article Six also states "no religious Test shall ever be required as a Qualification to any Office or public Trust under the United States."
Ratification clause. Article VII, Section 1. Article Seven details how to initiate the new government as proposed. The Constitution was transmitted to the Articles Congress, then after debate, forwarded to the states. States were to ratify the Constitution in state conventions specially convened for that purpose. The ratification conventions would arise directly from the people voting, and not by the forms of any existing State constitutions.
The new national Constitution would not take effect until at least nine states ratified. It would replace the existing government under the Articles of Confederation only after nine of the thirteen existing states agreed to move together by special state elections for one-time conventions. It would apply only to those states that ratified it, and it would be valid for all states joining after. The Articles Congress certified that eleven ratification conventions had adopted the proposed Constitution for their states on September 13, 1788, and in accordance with its resolution, the new Constitutional government began March 4, 1789. (See above Ratification and beginning.)
Changing the "fundamental law" is a two-step process: amendments are proposed, then they must be ratified by the states. An amendment can be proposed in one of two ways, and both involve the same two steps. It can be proposed by Congress and ratified by the states. Or, on demand of two-thirds of the state legislatures, Congress must call a constitutional convention to propose an amendment, which is then ratified by the states.
To date, all amendments, whether ratified or not, have been proposed by a two-thirds vote in each house of Congress. Over 10,000 constitutional amendments have been introduced in Congress since 1789; during the last several decades, between 100 and 200 have been offered in a typical congressional year. Most of these ideas never leave Congressional committee, and of those reported to the floor for a vote, far fewer get proposed by Congress to the states for ratification.[i]
In the first step, the proposed amendment must win a national supermajority of two-thirds in Congress, in both the House (the people) and the Senate (the states). The second step requires a larger supermajority of three-fourths of the states ratifying, representing a majority of the people in the ratifying states. Congress determines whether the state legislatures or special state conventions ratify the amendment.
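To make these thresholds concrete, here is a minimal Python sketch of the arithmetic, assuming the present-day figures of 435 House seats, 100 Senate seats, and 50 states (numbers not stated in the text above):

import math

# Assumed present-day sizes, used for illustration only.
HOUSE_SEATS = 435
SENATE_SEATS = 100
STATES = 50

def proposal_threshold(members: int) -> int:
    # Smallest whole number of votes reaching two-thirds of a chamber.
    return math.ceil(members * 2 / 3)

def ratification_threshold(states: int) -> int:
    # Smallest whole number of states reaching three-fourths.
    return math.ceil(states * 3 / 4)

print(proposal_threshold(HOUSE_SEATS))   # 290 votes needed in the House
print(proposal_threshold(SENATE_SEATS))  # 67 votes needed in the Senate
print(ratification_threshold(STATES))    # 38 states needed to ratify

Under these assumptions, an amendment needs 290 House votes and 67 Senate votes to be proposed, and 38 state ratifications to become part of the Constitution.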
On attaining Constitutional ratification of the proposal by three-fourths of the states, at that instant, the "fundamental law" for the United States of America is expressed in that Amendment. It is operative without any additional agency. Although the Founders considered alternatives, no signature is required from the President. Congress does not have to re-enact. The Supreme Court does not have to deliberate. There is no delay from a panel of lawyers to re-draft and re-balance the entire Constitution incorporating the new wording. The Amendment, with the last required state ratifying, is the "supreme law of the land."
Unlike amendments to most constitutions, amendments to the United States Constitution are appended to the body of the text without altering or removing what already exists. Newer text is given precedence.[j] Subsequent printed editions of the Constitution may line through the superseded passages with a note referencing the Amendment. Notes often cite applicable Supreme Court rulings incorporating the new fundamental law into American jurisprudence, when the first precedent was given, and in what way the earlier provisions were found void.
Over the last thirty years, there have been a few proposals for amendments in mainstream political debate. These include the Federal Marriage Amendment, the Balanced Budget Amendment, and the Flag Desecration Amendment. Another may be repeal of the 17th Amendment, restoring selection of U.S. Senators to state legislatures.
The Constitution has twenty-seven amendments. The first ten, collectively known as the Bill of Rights, were ratified together in 1791. The next seventeen were ratified separately over the following two centuries.
The National Archives displays the Bill of Rights as one of the three "Charters of Freedom". The original intent of these first ten Amendments was to restrict Congress from abusing its power. For example, the First Amendment – "Congress shall make no law" establishing a religion – was ratified by the states before all states had, of their own accord, disestablished their official churches.
The Federalist Papers argued that amendments were not necessary to adopt the Constitution. But without the promise of subsequent amendments made in their ratification conventions, Massachusetts, Virginia and New York could not have joined the Union as early as 1789. James Madison, true to his word, managed the proposed amendments through the new House of Representatives in its first session. The amendments that became the Bill of Rights were ten of the twelve proposals that Congress sent out to the states in 1789.[k]
Later in American history, applying the Bill of Rights directly to the states developed only with the Fourteenth Amendment.
No State shall make or enforce any law which shall abridge the privileges ... of citizens ... nor ... deprive any person of life, liberty, or property, without due process of law; nor deny ... the equal protection of the laws.
The legal mechanism that courts use today to extend the Bill of Rights against the abuses of state government is called "incorporation". The extent of its application is often at issue in modern jurisprudence.
Generally, the Bill of Rights can be seen as the States addressing three major concerns: individual rights, federal courts and the national government’s relationships with the States.
The First Amendment defines the American political community, based on individual integrity and voluntary association. Congress cannot interfere with an individual’s religion or speech. It cannot restrict a citizen’s communication with others to form community by worship, publishing, gathering together or petitioning the government.
Given their history of colonial government, most Americans wanted guarantees against the central government using the courts against state citizens. The Constitution already had individual protections such as strictly defined treason, no ex post facto law and guaranteed habeas corpus except during riot or rebellion. Now added protections came in five Amendments.
In 1789, future Federal-state relations were uncertain. To begin, the States in their militias were not about to be disarmed. And, if Congress wanted a standing army, Congress would have to pay for it, not "quarter" soldiers at state citizen expense. The people always have all their inalienable rights, even if they are not all listed in government documents. If Congress wanted more power, it would have to ask for it from the people in the states. And if the Constitution did not say something was for Congress to do, then the States have the power to do it without asking.
The Second Amendment guarantees the right of adult men to keep their own weapons apart from state-run arsenals.[l] Once government under the new Constitution began, states petitioned Congress to propose amendments including militia protections. New Hampshire’s proposal for amendment was, "Congress shall never disarm any citizen unless such as are or have been in actual rebellion." New York proposed, "... a well regulated militia, including the body of the people capable of bearing arms, is the proper, natural and safe defense of a free State."[m] Over time, this amendment has been confirmed by the courts to protect individual rights and used to overturn state legislation regulating handguns.
Applying the Second Amendment only to the Federal government, and not to the states, persisted for much of the nation's early history. It was sustained in United States v. Cruikshank (1876) to support disarming African-Americans holding arms in self-defense from Klansmen in Louisiana. The Supreme Court held, citizens must "look for their protection against any violation by their fellow-citizens from the state, rather than the national, government." Federal protection of an individual interfering with the state’s right to disarm any of its citizens came in Presser v. Illinois (1886). The Supreme Court ruled the citizens were members of the federal militia, as were "all citizens capable of bearing arms." A state cannot "disable the people from performing their duty to the General Government". The Court was harking back to the language establishing a federal militia in 1792.[n]
In 1939, the Supreme Court returned to a consideration of the militia. In U.S. v. Miller, the Court addressed the enforceability of the National Firearms Act of 1934 prohibiting a short-barreled shotgun. Held in the days of Bonnie Parker and Clyde Barrow, this ruling referenced units of well equipped, drilled militia, the Founders' "trainbands", the modern military Reserves.[o] It did not address the tradition of an unorganized militia. Twentieth century instances have been rare, but Professor Sanford Levinson has observed that consistency requires giving the Second Amendment the same dignity as the First, Fourth, Ninth and Tenth.[p]
Once again viewing federal relationships, the Supreme Court in McDonald v. Chicago (2010) determined that the right of an individual to "keep and bear arms" is protected by the Second Amendment. It is incorporated by the Due Process Clause of the Fourteenth Amendment, so it applies to the states.
The Third Amendment prohibits the government from using private homes as quarters for soldiers during peacetime without the consent of the owners. The states had suffered during the Revolution following the British Crown confiscating their militia's arms stored in arsenals in places such as Concord, Massachusetts, and Williamsburg, Virginia. Patrick Henry had rhetorically asked, shall we be stronger, "when we are totally disarmed, and when a British Guard shall be stationed in every house?" The only existing case law directly regarding this amendment is a lower court decision in the case of Engblom v. Carey. However, it is also cited in the landmark case, Griswold v. Connecticut, in support of the Supreme Court's holding that the constitution protects the right to personal privacy.
The Ninth Amendment declares that the listing of individual rights in the Constitution and Bill of Rights is not meant to be comprehensive; and that the other rights not specifically mentioned are retained by the people. The Tenth Amendment reserves to the states respectively, or to the people, any powers the Constitution did not delegate to the United States, nor prohibit the states from exercising.
Amendments to the Constitution after the Bill of Rights cover many subjects. The majority of the seventeen later amendments stem from continued efforts to expand individual civil or political liberties, while a few are concerned with modifying the basic governmental structure drafted in Philadelphia in 1787. Although the United States Constitution has been amended 27 times, only 26 of the amendments are currently in effect because the Twenty-first Amendment supersedes the Eighteenth.
Several of the amendments have more than one application, but five amendments have concerned citizen rights. American citizens are free. There will be equal protection under the law for all. Men vote, women vote, DC residents vote,[q] and 18-year olds vote.
The Thirteenth Amendment (1865) abolishes slavery and authorizes Congress to enforce abolition. The Fourteenth Amendment (1868), in part, defines a set of guarantees for United States citizenship. The Fifteenth Amendment (1870) prohibits the federal government and the states from using a citizen's race, color, or previous status as a slave as a qualification for voting. The Nineteenth Amendment (1920) prohibits the federal government and the states from forbidding any citizen the right to vote due to her sex. The Twenty-sixth Amendment (1971) prohibits the federal government and the states from forbidding any citizen of age 18 or greater the right to vote on account of his or her age.
The Twenty-third Amendment (1961) grants presidential electors to the District of Columbia. DC has three votes in the Electoral College as though it were a state with two senators and one representative in perpetuity. If Puerto Rico were given the same consideration, it would have seven Electoral College votes.[r]
Seven amendments relate to the three branches of the Federal government. Congress has three, the Presidency has four, the Judiciary has one.
The Sixteenth Amendment (1913) authorizes unapportioned federal taxes on income. The Twentieth Amendment (1933), in part, changes details of congressional terms. The Twenty-seventh Amendment (1992) limits congressional pay raises.
The Twelfth Amendment (1804) changes the method of presidential elections so that members of the Electoral College cast separate ballots for president and vice president. The Twentieth Amendment (1933), in part, changes details of presidential terms and of presidential succession. The Twenty-second Amendment (1951) limits the president to two terms. The Twenty-fifth Amendment (1967) further changes details of presidential succession, provides for temporary removal of president, and provides for replacement of the vice president.
The Eleventh Amendment (1795), in part, clarifies judicial power over foreign nationals.
State citizens. The states have been protected from their citizens by a Constitutional Amendment. Citizens are limited when suing their states in Federal Court. The Eleventh Amendment (1795) in part, limits ability of citizens to sue states in federal courts and under federal law.
Most states. All states have been required to conform to the others when their delegations in Congress could accumulate super-majorities in the U.S. House and U.S. Senate, and three-fourths of the states of the same opinion required it of all: (a) the states must not allow alcohol to be sold for profit; then (b) the states may or may not allow alcohol to be sold for profit. The Eighteenth Amendment (1919) prohibited the manufacturing, importing, and exporting of alcoholic beverages (see Prohibition in the United States). It was repealed by the Twenty-first Amendment (1933), which permits states to prohibit the importation of alcoholic beverages.
State legislatures. Occasionally in American history, the people have had to strip state legislatures of some few privileges due to widespread, persisting violations to individual rights. States must administer equal protection under the Constitution and the Bill of Rights. States must guarantee rights to all citizens of the United States as their own. State legislatures will not be trusted to elect U.S. Senators. States must allow all men to vote. States must allow women to vote. States cannot tax a U.S. citizen’s right to vote.
Of the thirty-three amendments that have been proposed by Congress, twenty-seven have passed. Six have failed ratification by the required three-quarters of the state legislatures. Two have passed their deadlines. Four are technically in the eyes of a Court, still pending before state lawmakers (see Coleman v. Miller). All but one are dead-ends.
The "Titles of Nobility Amendment" (TONA), proposed by the 11th Congress on May 1, 1810, would have ended the citizenship of any American accepting "any Title of Nobility or Honour" from any foreign power. Some maintain that the amendment was ratified by the legislatures of enough states, and that a conspiracy has suppressed it, but this has been thoroughly debunked.
The proposed amendment addressed the same "republican" and nationalist concern evident in the original Constitution, Article I, Section 9. No officer of the United States, "without the Consent of the Congress, [shall] accept of any present, Emolument, Office, or Title, of any kind whatever, from any King, Prince or foreign State." The Constitutional provision is unenforceable because the offense is not subject to a penalty.
Known to have been ratified by lawmakers in twelve states, the last in 1812, this amendment contains no expiration date for ratification and could still be ratified were the state legislatures to take it up.
Starting with the proposal of the 18th Amendment in 1917, most proposed amendments have included a deadline for passage in the text of the amendment. Five without a deadline became Amendments.[s] One proposed amendment without a deadline has not been ratified: the Child Labor Amendment of 1924.
There are two amendments that were approved by Congress but were not ratified by enough states prior to the ratification deadline set by Congress: the Equal Rights Amendment and the District of Columbia Voting Rights Amendment.
The way the Constitution is understood is influenced by court decisions, especially those of the Supreme Court. These decisions are referred to as precedents. Judicial review is the power of the Court to examine federal legislation, executive agency rules and state laws, to decide their constitutionality, and to strike them down if found unconstitutional.
Judicial review includes the power of the Court to explain the meaning of the Constitution as it applies to particular cases. Over the years, Court decisions on issues ranging from governmental regulation of radio and television to the rights of the accused in criminal cases have changed the way many constitutional clauses are interpreted, without amendment to the actual text of the Constitution.
Legislation passed to implement the Constitution, or to adapt those implementations to changing conditions, broadens and, in subtle ways, changes the meanings given to the words of the Constitution. Up to a point, the rules and regulations of the many federal executive agencies have a similar effect. If an action of Congress or the agencies is challenged, however, it is the court system that ultimately decides whether these actions are permissible under the Constitution.
The Supreme Court has indicated that once the Constitution has been extended to an area (by Congress or the Courts), its coverage is irrevocable. To hold that the political branches may switch the Constitution on or off at will would lead to a regime in which they, not this Court, say "what the law is.".[t]
Courts established by the Constitution can regulate government under the Constitution, the supreme law of the land. First, they have jurisdiction over actions by an officer of government and state law. Second, Federal courts may rule on whether coordinate branches of national government conform to the Constitution. Until the Twentieth Century, the Supreme Court of the United States may have been the only high tribunal in the world to exercise constitutional interpretation of fundamental law; others generally depended on their national legislatures.
The basic theory of American Judicial review is summarized by constitutional legal scholars and historians as follows: the written Constitution is fundamental law. It can change only by extraordinary legislative process of national proposal, then state ratification. The powers of all departments are limited to enumerated grants found in the Constitution. Courts are expected (a) to enforce provisions of the Constitution as the supreme law of the land, and (b) to refuse to enforce anything in conflict with it.
In Convention. As to judicial review and the Congress, the first proposals by Madison (Va) and Wilson (Pa) called for a supreme court veto over national legislation. In this it resembled the system in New York, where the Constitution of 1777 called for a "Council of Revision" by the Governor and Justices of the state supreme court. The Council would review and, in a way, veto any passed legislation violating the spirit of the Constitution before it went into effect. The nationalists’ proposal in Convention was defeated three times, and replaced by a presidential veto with Congressional over-ride. Judicial review relies on the jurisdictional authority in Article III, and the Supremacy Clause.
The justification for judicial review is to be explicitly found in the open ratifications held in the states and reported in their newspapers. John Marshall in Virginia, James Wilson in Pennsylvania and Oliver Ellsworth of Connecticut all argued for Supreme Court judicial review of acts of state legislature. In Federalist No. 78, Alexander Hamilton advocated the doctrine of a written document held as a superior enactment of the people. "A limited constitution can be preserved in practice no other way" than through courts which can declare void any legislation contrary to the Constitution. The preservation of the people’s authority over legislatures rests "particularly with judges."[u]
The Supreme Court was initially made up of jurists who had been intimately connected with the framing of the Constitution and the establishment of its government as law. John Jay (NY), a co-author of the Federalist Papers, served as Chief Justice for the first six years. The second Chief Justice for a term of four years, was Oliver Ellsworth (Ct), a delegate in the Constitutional Convention, as was John Rutledge (SC), Washington’s recess appointment as Chief Justice who served in 1795. John Marshall (Va), the fourth Chief Justice, had served in the Virginia Ratification Convention in 1788. His service on the Court would extend 34 years over some of the most important rulings to help establish the nation the Constitution had begun. In the first years of the Supreme Court, members of the Constitutional Convention who would serve included James Wilson (Pa) for ten years, John Blair, Jr. (Va) for five, and John Rutledge (SC) for one year as Justice, then Chief Justice in 1795.
When John Marshall followed Oliver Ellsworth as Chief Justice of the Supreme Court in 1801, the federal judiciary had been established by the Judiciary Act, but there were few cases, and less prestige. "The fate of judicial review was in the hands of the Supreme Court itself." Review of state legislation and appeals from state supreme courts was understood. But in the Court’s early life, jurisdiction over state legislation was limited. The Marshall Court's landmark Barron v. Baltimore held that the Bill of Rights restricted only the federal government, and not the states.
In the landmark Marbury v. Madison case, the Supreme Court asserted its authority of judicial review over Acts of Congress. Its findings were that Marbury and the others had a right to their commissions as judges in the District of Columbia, and that the law afforded Marbury a remedy at court. Then Marshall, writing the opinion for the majority, announced the conflict he had found between Section 13 of the Judiciary Act of 1789 and Article III.[v][w] The United States government, as created by the Constitution, is a limited government, and a statute contrary to it is not law. In this case, both the Constitution and the statutory law applied to the particulars at the same time. "The very essence of judicial duty," according to Marshall, was to determine which of the two conflicting rules should govern. The Constitution enumerates powers of the judiciary to extend to cases arising "under the Constitution." Courts were required to choose the Constitution over Congressional law. Further, justices take a Constitutional oath to uphold it as the "Supreme law of the land".
"This argument has been ratified by time and by practice ..."[x][y] The Supreme Court did not declare another Act of Congress unconstitutional until the disastrous Dred Scott decision in 1857, handed down after the voided Missouri Compromise statute had already been repealed. In the eighty years following the Civil War to World War II, the Court voided Congressional statutes in 77 cases, on average almost one a year.
Something of a crisis arose when, in 1935 and 1936, the Supreme Court handed down twelve decisions voiding Acts of Congress relating to the New Deal. President Franklin D. Roosevelt then responded with his abortive "court packing plan". Other proposals have suggested a Court super-majority to overturn Congressional legislation, or a Constitutional Amendment to require that the Justices retire at a specified age by law. To date, the Supreme Court’s power of judicial review has persisted.
The power of judicial review could not have been preserved long in a democracy unless it had been "wielded with a reasonable measure of judicial restraint, and with some attention, as Mr. Dooley said, to the election returns." Indeed, the Supreme Court has developed a system of doctrine and practice that self-limits its power of judicial review.
The Court controls almost all of its business by choosing which cases to consider, through writs of certiorari. In this way it can avoid expressing an opinion if it sees an issue as currently embarrassing or difficult. The Supreme Court limits itself by defining for itself what is a "justiciable question." First, the Court is fairly consistent in refusing to make any "advisory opinions" in advance of actual cases.[z] Second, "friendly suits" between those of the same legal interest are not considered. Third, the Court requires a "personal interest", not one generally held, and a legally protected right must be immediately threatened by government action. Cases are not taken up if the litigant has no standing to sue. Having the money to sue or being injured by government action alone are not enough.
These three procedural ways of dismissing cases have led critics to charge that the Supreme Court delays decisions by unduly insisting on technicalities in their "standards of litigability". Under the Court’s practice, there are cases left unconsidered which are in the public interest, with genuine controversy, and resulting from good faith action. "The Supreme Court is not only a court of law but a court of justice."
The Supreme Court balances several pressures to maintain its roles in national government. It seeks to be a co-equal branch of government, but its decrees must be enforceable. The Court seeks to minimize situations where it asserts itself superior to either President or Congress, but Federal officers must be held accountable. The Supreme Court assumes the power to declare acts of Congress unconstitutional, but it self-limits its passing on constitutional questions. But the Court’s guidance on basic problems of life and governance in a democracy is most effective when American political life reinforces its rulings.
Justice Brandeis summarized four general guidelines that the Supreme Court uses to avoid constitutional decisions relating to Congress:[aa] The Court will not anticipate a question of constitutional law nor decide open questions unless a case decision requires it. If it does, a rule of constitutional law is formulated only as the precise facts in the case require. The Court will choose statutes or general law as the basis of its decision if it can, without reaching constitutional grounds. If it must reach them, the Court will choose a construction of an Act of Congress that avoids the constitutional question, even if its constitutionality is seriously in doubt.
Likewise with the Executive Department, Edward Corwin observed that the Court does sometimes rebuff presidential pretensions, but it more often tries to rationalize them. Against Congress, an Act is merely "disallowed." In the executive case, exercising judicial review produces "some change in the external world" beyond the ordinary judicial sphere. The "political question" doctrine especially applies to questions which present a difficult enforcement issue. Chief Justice Charles Evans Hughes addressed the Court’s limitation when political process allowed future policy change, but a judicial ruling would "attribute finality". Political questions lack "satisfactory criteria for a judicial determination."
John Marshall recognized that the president holds "important political powers" which, as Executive privilege, allow great discretion. This doctrine was applied in Court rulings on President Grant’s duty to enforce the law during Reconstruction. It extends to the sphere of foreign affairs. Justice Robert Jackson explained that foreign affairs are inherently political, "wholly confided by our Constitution to the political departments of the government ... [and] not subject to judicial intrusion or inquiry."
Critics of the Court object in two principal ways to its self-restraint in judicial review, deferring as it does as a matter of doctrine to Acts of Congress and Presidential actions. (1) Its inaction is said to allow "a flood of legislative appropriations" which permanently create an imbalance between the states and the federal government. (2) Supreme Court deference to Congress and the executive compromises American protection of civil rights, political minority groups and aliens.
Supreme Courts under the leadership of subsequent Chief Justices have also used judicial review to interpret the Constitution among individuals, states and Federal branches. Notable contributions were made by the Chase Court, the Taft Court, the Warren Court, and the Rehnquist Court.
Salmon P. Chase was a Lincoln appointee, serving as Chief Justice from 1864 to 1873. His career encompassed service as a U.S. Senator and Governor of Ohio. He coined the slogan, "Free soil, free labor, free men." One of Lincoln’s "team of rivals", he was appointed Secretary of the Treasury during the Civil War, issuing "greenbacks". To appease radical Republicans, Lincoln appointed him to replace Chief Justice Roger B. Taney of Dred Scott case fame.
In one of his first official acts, Chase admitted John Rock, the first African-American to practice before the Supreme Court. The "Chase Court" is famous for Texas v. White, which asserted a permanent Union of indestructible states. Veazie Bank v. Fenno upheld the Civil War tax on state banknotes. Hepburn v. Griswold found parts of the Legal Tender Acts unconstitutional, though that ruling was reversed by a later Supreme Court majority.
William Howard Taft, a former President appointed Chief Justice by Harding and serving from 1921 to 1930, advocated the Judiciary Act of 1925 that brought the Federal District Courts under the administrative jurisdiction of the Supreme Court; the newly unified branch of government also initiated its own separate building, in use today. Taft successfully sought the expansion of Court jurisdiction over non-states such as the District of Columbia and the Territories of Arizona, New Mexico, Alaska and Hawaii. Later extensions added the Spanish-American War acquisitions of the Commonwealth of the Philippines and Puerto Rico.
In 1925, the Taft Court issued a ruling revisiting a Marshall Court ruling on the Bill of Rights. In Gitlow v. New York, the Court established the doctrine of "incorporation", which began applying the Bill of Rights to the states. Important cases included Board of Trade v. Olsen, which upheld Congressional regulation of commerce. Olmstead v. U.S. permitted the use of evidence obtained by wiretapping without a warrant, declining to apply the proscription against unreasonable searches to it. Wisconsin v. Illinois ruled that the equitable power of the United States can impose positive action on a state to prevent its inaction from damaging another state.
Earl Warren was an Eisenhower nominee, Chief Justice from 1953 to 1969. Warren’s Republican career in the law reached from county prosecutor to California state attorney general to three consecutive terms as Governor. His programs stressed progressive efficiency, expanding state education, re-integrating returning veterans, infrastructure and highway construction.
In 1954, the Warren Court overturned a landmark Fuller Court ruling on the Fourteenth Amendment that had interpreted racial segregation as permissible in government and commerce providing "separate but equal" services. Warren built a coalition of Justices after 1962 that developed the idea of natural rights as guaranteed in the Constitution. Brown v. Board of Education banned segregation in public schools. Baker v. Carr and Reynolds v. Sims established Court-ordered "one-man-one-vote." Bill of Rights Amendments were incorporated into the states. Due process was expanded in Gideon v. Wainwright and Miranda v. Arizona. First Amendment rights were addressed in Griswold v. Connecticut concerning privacy, and Engel v. Vitale relative to the establishment of religion.
William Rehnquist was a Reagan appointment to Chief Justice, serving from 1986 to 2005. While he would concur with overthrowing a state supreme court’s decision, as in Bush v. Gore, he built a coalition of Justices after 1994 that developed the idea of federalism as provided for in the Tenth Amendment. In the hands of the Supreme Court, the Constitution and its Amendments were to restrain Congress, as in City of Boerne v. Flores.
Nevertheless, the Rehnquist Court was noted in the contemporary "culture wars" for overturning state laws relating to privacy, striking down prohibitions on late-term abortions in Stenberg v. Carhart and on sodomy in Lawrence v. Texas, and for rulings protecting free speech in Texas v. Johnson and affirmative action in Grutter v. Bollinger.
There is a viewpoint that some Americans have come to see the documents of the Constitution, along with the Declaration of Independence and the Bill of Rights as being a cornerstone of a type of civil religion. This is suggested by the prominent display of the Constitution, along with the Declaration of Independence and the Bill of Rights, in massive, bronze-framed, bulletproof, moisture-controlled glass containers vacuum-sealed in a rotunda by day and in multi-ton bomb-proof vaults by night at the National Archives Building.
The idea of displaying the documents strikes some academic critics, looking from the point of view of the America of 1776 or 1789, as "idolatrous, and also curiously at odds with the values of the Revolution." By 1816, Jefferson wrote that "[s]ome men look at constitutions with sanctimonious reverence and deem them like the ark of the covenant, too sacred to be touched." But he saw imperfections and imagined that potentially, there could be others, believing as he did that "institutions must advance also".
The United States Constitution has had a considerable influence worldwide on later constitutions. International leaders have followed it as a model within their own traditions. These leaders include Benito Juarez of Mexico, Jose Rizal of the Philippines and Sun Yat-sen of China.
The United States Constitution has faced various criticisms since its inception in 1787.
What Is the Environmental Impact of Petroleum and Natural Gas?
What you know as oil is actually called petroleum or crude oil and may exist as a combination of liquid, gas, and sticky, tar-like substances. Oil and natural gas are cleaner fuels than coal, but they still have many environmental disadvantages.
The secret to fossil fuels’ ability to produce energy is that they contain a large amount of carbon. This carbon is left over from living matter — primarily plants — that lived millions of years ago. Oil and natural gas are usually the result of lots of biological matter that settles to the seafloor, where the hydrocarbons (molecules of hydrogen and carbon), including methane gas, become trapped in rocks.
Petroleum sources are usually small pockets of liquid or gas trapped within rock layers deep underground (often under the seafloor). Extracted crude oil is refined and used to manufacture gasoline (used in transportation) and petrochemicals (used in the production of plastics, pharmaceuticals, and cleaning products).
Like other resources, oil isn’t evenly distributed across the globe. The top oil-producing countries are Saudi Arabia, Russia, the U.S., Iran, China, Canada, and Mexico. Together, these countries produce more than half of the total oil resources in the world.
While some petroleum is found in gas form, the most common natural gas is methane. Methane usually occurs in small amounts with petroleum deposits and is often extracted at the same time as the petroleum. Natural gas can be found in certain rock layers, trapped in the tiny spaces in sedimentary rocks.
The environmental impact of drilling for oil
Oil companies pump liquid oil out of the ground by using drilling rigs and wells that access the pockets of oil resources. The oil fills the rock layers the way water fills a sponge — spreading throughout open spaces — instead of existing as a giant pool of liquid.
This arrangement means that to pump out all the oil, drillers have to extend or relocate the wells after the immediate area has been emptied. Oil drilling rigs set on ocean platforms to access oil reserves below the seafloor therefore employ a series of increasingly complex rig designs built to reach reserves in deeper and deeper water.
This figure illustrates some of the most commonly used ocean drilling rigs and platforms and the water depths they’re most suited for.
Oil is a cleaner fuel than coal, but it still has many disadvantages, such as the following:
Refining petroleum creates air pollution. Transforming crude oil into petrochemicals releases toxins into the atmosphere that are dangerous for human and ecosystem health.
Burning gasoline releases CO2. Although oil doesn’t produce the same amount of CO2 that coal burning does, it still contributes greenhouse gases to the atmosphere and increases global warming.
Oil spills cause great environmental damage. Large oil spills sometimes occur during drilling, transport, and use, which of course affect the surrounding environment. But these spills aren’t the only risk.
Although large oil spills with catastrophic environmental effects — such as the 1989 Exxon Valdez in Alaska or the 2010 BP Deepwater Horizon in the Gulf of Mexico — get the most media coverage, most of the oil spilled into ecosystems is actually from oil that leaks from cars, airplanes, and boats, as well as illegal dumping.
The environmental impact of fracking for natural gas
Natural gas is a relatively clean-burning fuel source — it produces approximately half the CO2 emissions that coal burning produces — so demand for natural gas has increased in the last few decades as concerns grow about carbon emissions and global warming.
Now fuel producers are exploring natural gas in reservoirs separate from petroleum as sources of this fuel. To release the gas from the rocks and capture it for use as fuel, companies use a method of hydraulic fracturing, or fracking.
Fracking for natural gas requires injecting a liquid mix of chemicals, sand, and water into the gas-bearing rock at super high pressures — high enough to crack open the rock, releasing trapped gases. The gas is then pumped out of the rock along with the contaminated water.
The sand and chemicals are left behind in the rock fractures, leading to groundwater pollution and potentially less stable bedrock. Currently scientists are concerned that earthquakes in regions of the Midwestern United States that have never experienced earthquakes before are the result of wastewater from natural gas fracking operations.
Algae are primarily aquatic plant-like organisms that convert light, carbon dioxide (CO2), water, and nutrients such as nitrogen and phosphorus into oxygen and biomass, including lipids - the generic name for the primary storage form of natural oils. Single-cell or microalgae are most interesting because of the speed and efficiency at which they produce lipids. However, some of them can be susceptible to contamination from bacteria, viruses, and other undesirable algal species, which can reduce the quality and yield of the lipids. Consequently, researchers are trying to develop algal species that are both efficient at lipid production and resistant to contamination.
Algae can produce more lipids per acre of harvested land than terrestrial plants because of their high lipid content and rapid growth rates. The National Renewable Energy Laboratory (NREL) estimates that the oil yield for a moderately productive algal species could be about 1,200 gallons per acre; compared to 48 gallons per acre for soybeans. The high productivity of algae could significantly reduce the land use associated with production of biofuels. For example, it would take 62.5 million acres of soybeans (an area approximately the size of Wyoming) to produce the same 3 billion gallons of oil that could be produced from only 2.5 million acres of algae (an area approximately 70 percent the size of Connecticut). Three billion gallons of biodiesel represent about 8 percent of all the diesel fuel used for on-road transportation in the United States in 2008.
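As a quick check on the land-use comparison above, the following Python sketch reproduces the arithmetic from the NREL yield figures quoted in the text (1,200 gallons per acre for algae, 48 gallons per acre for soybeans, and a 3-billion-gallon target); the acres-per-square-mile conversion is a standard constant added here for illustration, not a figure from the article:

# Per-acre oil yields quoted above (gallons per acre).
ALGAE_YIELD = 1200
SOY_YIELD = 48

TARGET_GALLONS = 3_000_000_000  # 3 billion gallons of oil
ACRES_PER_SQ_MILE = 640         # standard conversion, assumed

def acres_needed(target_gallons, yield_per_acre):
    # Acres required to produce the target volume at a given per-acre yield.
    return target_gallons / yield_per_acre

soy_acres = acres_needed(TARGET_GALLONS, SOY_YIELD)      # 62.5 million acres
algae_acres = acres_needed(TARGET_GALLONS, ALGAE_YIELD)  # 2.5 million acres

print(f"Soybeans: {soy_acres / 1e6:.1f} million acres ({soy_acres / ACRES_PER_SQ_MILE:,.0f} sq mi)")
print(f"Algae: {algae_acres / 1e6:.1f} million acres ({algae_acres / ACRES_PER_SQ_MILE:,.0f} sq mi)")

The result, roughly 97,700 square miles for soybeans versus about 3,900 square miles for algae, is consistent with the Wyoming and Connecticut comparisons given above.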
Algae have other desirable properties. Some can be grown on non-arable or non-productive land. They grow in brackish, saline, and fresh water, and can thrive in wastewater. Though algae can also produce valuable products such as vitamins and dietary supplements, they are not themselves a human food source so there is no direct competition between food and fuel. They do, however, compete with some of the nutrients required for growing food. Since they require CO2 for growth, algae can also sequester CO2 from power plants or other CO2 sources.
Currently, there are open and closed approaches to cultivating algae. Open cultivation essentially grows algae much like it grows in nature. Open systems usually consist of one or more ponds exposed to the atmosphere, or protected in greenhouses. Although open systems are the cheapest of current cultivation approaches, they create more opportunities for contamination. Other disadvantages include lack of temperature and light control, requiring that open systems must be located where the climate is warm and sunlight is abundant.
Closed systems, called photobioreactors, typically comprise enclosed translucent containers that allow photosynthesis to occur. The plastic or glass containers are arranged to maximize algae exposure to light. Indoor systems require artificial light, while outdoor systems can use natural sunlight or a combination of sunlight and artificial illumination. In closed systems, temperature, evaporation loss, light intensity, and contamination by other algal species can be controlled better. However, elements needed for algal growth, such as water, CO2, and other minerals, must be artificially introduced. Scaling these input requirements for commercial production is difficult and expensive. Capital costs for closed systems are generally substantially higher than for open systems.
Scalability remains a major obstacle. Harvesting and oil extraction are relatively costly. Large volumes of water need to be managed and recycled in the processing of algae. In addition, the use of chemical solvents for extracting the oil and the energy requirements for each phase of the harvesting and oil extraction process add cost. Once the oil has been extracted, various conversion pathways exist for transforming the oil into a liquid fuel. Transesterification (described in the April 21, 2010 TWIP) is the pathway from algal oil to biodiesel. Alternatively, refining of algal oil yields renewable diesel or jet fuel, very similar to fuels produced from petroleum.
Currently, most estimates of the production cost of algal oil range from $4-$40 per gallon depending on the type of cultivation system used. Despite the many challenges, however, the Federal government, large energy companies, and venture capitalists are continuing to fund demonstration projects and research into developing algae-based biofuels for commercial application.
U.S. Average Gasoline and Diesel Prices Moving Up
The U.S. average price for regular gasoline increased about three cents to $2.75 per gallon, $0.25 higher than a year ago. The average on the East Coast moved up two cents to $2.67 per gallon. The largest increase occurred in the Midwest where the average went up nearly four cents to $2.74 per gallon. The Gulf Coast increased less than four cents to $2.59 per gallon. The average in the Rocky Mountains was essentially unchanged at $2.75 per gallon. On the West Coast, the average rose a penny to $3.07 per gallon. Inching up a fraction of a cent, the average in California was virtually unchanged at $3.13 per gallon.
For the first time since the week of June 21, the national average price for diesel fuel increased, moving up two cents to $2.92 per gallon, $0.39 above last year. Average prices on the East Coast and the Gulf Coast increased about two cents to $2.93 per gallon and $2.88 per gallon, respectively. The largest increase took place in the Midwest where the price climbed two and a half cents to $2.89 per gallon. Prices in the Rocky Mountains and on the West Coast went up nearly two cents to $2.92 per gallon and $3.06 per gallon, respectively. The average in California rose a penny to $3.13 per gallon.
Propane Inventories Edge Up
U.S. propane inventories continued their seasonal growth last week, edging up by a modest 0.5 million barrels to end at 53.0 million barrels total. The Midwest region gained the bulk of the stocks with 0.6 million barrels of new inventory. The Rocky Mountain/West Coast region added 0.1 million barrels and the East Coast regional stocks were effectively unchanged. The Gulf Coast region drew 0.1 million barrels of propane stocks. Propylene non-fuel use inventories decreased their share of total propane/propylene stocks from 6.0 percent to 5.4 percent.
History of California
Introduction to History of California
Pre-Columbian California was inhabited by numerous Indian tribes, most of them living by hunting and gathering. Their main foods were acorns, fish, and game. Among the major tribes were the Hupa, Mojave, Modoc, and Yuma. At the time of contact with the Spanish, there were at least 150,000 Indians in California.
Important dates in California
|1542|Juan Rodriguez Cabrillo explored San Diego Bay.|
|1579|Francis Drake sailed along the coast and claimed California for England.|
|1602|Sebastian Vizcaino urged that Spain colonize California.|
|1769|Gaspar de Portola led a land expedition up the California coast. Junipero Serra established the first Franciscan mission in California, in what is now the city of San Diego.|
|1776|Spanish settlers from New Spain (Mexico) reached the site of what is now San Francisco.|
|1812|Russian fur traders built Fort Ross.|
|1822|California became part of Mexico, which had just won its independence from Spain in 1821.|
|1841|The Bidwell-Bartleson party became the first organized group of American settlers to travel to California by land.|
|1846|American rebels raised the "Bear Flag" of the California Republic over Sonoma. U.S. forces conquered California during the Mexican War (1846-1848).|
|1848|James W. Marshall discovered gold at Sutter's Mill. The discovery led to the California gold rush. The United States defeated Mexico in the Mexican War and acquired California in the Treaty of Guadalupe Hidalgo.|
|1850|California became the 31st state on September 9.|
|1880's|A population boom occurred as a result of a railroad and real estate publicity campaign that brought thousands of people to California.|
|1906|An earthquake and fire destroyed much of San Francisco.|
|1915|Expositions were begun at San Diego and San Francisco to mark the opening of the Panama Canal.|
|1945|The United Nations Charter was adopted at the San Francisco Conference.|
|1963|California became the state with the largest population in the United States.|
|1978|California voters approved a $7-billion cutback in state property taxes.|
|1989|A strong earthquake struck the San Francisco-Oakland-San Jose area.|
|1994|A strong earthquake struck Los Angeles.|
|2003|Voters recalled Governor Gray Davis and elected motion-picture star Arnold Schwarzenegger to replace him.|
After the Spanish conquered Mexico in the early 16th century, they searched for other areas rich in gold, sending several sea expeditions northward along the Pacific coast. The first of these to reach what is now California was led by Juan Rodríguez Cabrillo, who in 1542 sailed from Navidad, Mexico, up the coast of California. His expedition reached San Diego Bay and claimed the land for Spain.
In 1579 the English privateer Francis Drake, on a voyage around the world, landed near San Francisco Bay and claimed the land for England. The Spanish reaffirmed their own claim to California by further exploration. They were motivated in part by the need for a safe harbor for returning Manila galleons, as ships engaged in trade between Mexico and the Philippines were called. In 1595, Sebastián Rodríguez Cermeño landed at what is now called Drake's Bay. (The bay, a short distance north of San Francisco Bay, is probably the site of Drake's landing of 1579.) In 1602 Sebastián Vizcaíno landed at Monterey Bay.
Colonization did not begin until 1769, when an expedition under Gaspar de Portolá and Franciscan Father Junípero Serra established presidios (military posts) and missions at San Diego and Monterey. After San Diego had been reached, Portolá led an overland expedition in search of Monterey Bay, but he failed to recognize it and continued on, discovering San Francisco Bay.
A total of 21 missions were eventually established, placed about every 30 miles (48 km) from San Diego to Sonoma. The missions were intended to convert and educate the Indians. The Indians were encouraged to live at the missions and give up their traditional ways. However, the missionaries often treated the Indians like slaves, and occasional revolts were ruthlessly quelled.
The first Spanish settlers arrived with Juan Bautista de Anza in 1776 and founded San Francisco. To encourage settlement, the Spanish government made land grants to the Californios, as the Spanish colonists were called. The Californios established ranchos, or cattle ranches, and the production of hides, meat, and tallow became the mainstays of colonial California's economy. Mexico, including California, gained independence from Spain during 1821-22.
In 1812, Russians engaged in fur trading established an outpost, Fort Ross, north of San Francisco. They left in 1841, when the otter and seal in the region had been almost exterminated. In 1826, Jedediah Smith, a fur trapper and explorer, became the first American to reach California overland from the east. He was soon followed by other fur trappers and by American settlers.
Annexation and the Gold Rush
John C. Frémont, a U.S. Army officer, led a scientific expedition to California in 1844. In 1846, on a second trip, he encouraged the American ranchers in the north to revolt against Mexican rule. They seized Sonoma and proclaimed a republic. Meanwhile, the Mexican War had started, and an American naval squadron soon seized Monterey. The north was quickly taken; the south fell to American forces under General Stephen W. Kearny and Commodore Robert F. Stockton in 1847. Mexico ceded California to the United States under the terms of the Treaty of Guadalupe Hidalgo, 1848, which ended the Mexican War.
In that year, gold was discovered by James W. Marshall at Sutter's Mill, on the American River in what is now El Dorado County, and news of the discovery spread rapidly. By 1849, prospectors from all parts of the United States and from many foreign countries were rushing to northern California. Many of the "Forty-niners," as the prospectors were called, arrived with exaggerated expectations of gaining easy wealth, but the reality was that few miners became rich. Nonetheless, the population continued to grow, increasing in two years from about 20,000 to more than 90,000, as immigrants were lured by the fertile soil and pleasant climate. California became a state in 1850, and in 1854 Sacramento was made the capital.
A Century of Development
Because of its distance and isolation from the rest of the country, travel to and communication with California were difficult. Early settlers either traveled overland; sailed around Cape Horn; or sailed to the Central American isthmus, crossed to the Pacific side, and took another ship to California. The Pony Express offered mail service between the east and California for 18 months until the first transcontinental telegraph went into operation in 1861.
Construction of the first transcontinental railroad began in 1863, with the Central Pacific Railroad being built eastward from Sacramento. The line met the Union Pacific, being built westward, in 1869. The connection of the Southern Pacific Railroad to eastern lines in 1881 created a second rail link to the east. Large numbers of settlers traveled to California by railroad.
What the new immigrants found was often disappointing. Railroad barons and large landowners controlled California's government and much of its economy. Corruption was widespread, railroad freight rates were exorbitant, and wages were low. Much of the land was arable only with irrigation, but, under California law, those who owned land along riverbanks were able to deny others access to water. This was changed after the passage of the Wright Act in 1887, permitting the formation of irrigation districts so that water could be distributed more fairly.
Fear of competition from Chinese laborers, who had come to California in large numbers to help build the railroads, led to anti-Chinese sentiment. There was sporadic anti-Chinese violence during the 1860's and 1870's, and discriminatory laws were passed. Japanese workers, first brought to California as farm laborers, also faced discrimination.
In the 1870's, citrus fruit growing expanded, especially after the introduction of navel and Valencia oranges assured year-round harvests. Lemons and grapefruits were soon introduced, and shipments to eastern markets by refrigerated boxcars began. Also about this time, oil was discovered in several parts of the state.
The population grew by 60 per cent during 1900-10. In 1906, much of San Francisco was destroyed by an earthquake and fire that leveled about 28,000 buildings and killed at least 3,000 people; the city was soon rebuilt. In 1910, the voters rebelled against corruption and elected Hiram Johnson, a reform candidate, governor. Many progressive laws were passed, and the influence of the railroads in politics decreased. At about this time, the motion picture industry became centered in Hollywood.
During the decade beginning in 1910, there were several instances of violence caused by labor strife. In 1910, union radicals bombed the building of the anti-union Los Angeles Times, killing 20 persons. In 1913, near the town of Wheatland, a riot occurred when the sheriff attempted to arrest the leaders of striking farm workers. Four people were killed, and many more injured. In 1917, labor leader Thomas J. Mooney and an associate, Warren K. Billings, were convicted of planting a bomb that had killed 10 people in San Francisco the previous year. Mooney and Billings won widespread sympathy, their defenders claiming that they had been denied justice because of antilabor prejudice. (In 1939, Governor Culbert L. Olson pardoned Mooney and commuted Billings' sentence to time served.)
In order to provide water for irrigation and the increased population, huge water-diversion projects were undertaken. The Los Angeles Aqueduct, extending 240 miles (386 km) from the Owens Valley to Los Angeles, was completed in 1913. It aroused intense, and sometimes violent, controversy because persons displaced by it believed they had been forced to sell their land for an unfair price. Hetch Hetchy Dam, part of a water-supply project for San Francisco, was completed in 1923. Hoover Dam, finished in 1936, was part of the Boulder Canyon Project, which brought water from the Colorado River to southern California.
There was an oil boom in the 1920's, when new petroleum deposits were found. Also in that decade, the population increased 65 per cent. After the Great Depression began in 1929, thousands of people migrated to California, large numbers of them impoverished farmers from the drought-stricken Plains states. Many of those who came sought work as migrant agricultural workers; lack of jobs and low pay caused more labor unrest. Not until World War II were there jobs for everyone.
During the war, rapid growth took place in California's defense-related industries, including aircraft construction, shipbuilding, textiles, and chemicals. In 1942 some 93,000 persons of Japanese descent, the majority of them American citizens, were removed from California to relocation camps, because of fear that they would commit espionage or sabotage. Many of them lost their homes, businesses, and possessions in the relocation.
Since World War II
Under the leadership of Governor Earl Warren (1943-53), California experienced a major economic boom. There was the largest influx of newcomers in the state's history, leading to overcrowding of schools and shortages of housing and water. In 1952, one of California's United States senators, Richard M. Nixon, was elected vice president. Also that year, Governor Warren became Chief Justice of the United States.
In the 1960's, San Francisco became a center of the "hippie" subculture, made up of young people who rejected mainstream values and embraced communal living, used drugs freely, and ignored conventional restraints on sexual activity. Students at the University of California at Berkeley began a "Free Speech Movement" in 1964 to protest limitations on political activity on campus. California students also demanded a greater voice in university administration.
In 1965, riots erupted in Watts, a predominantly black area in Los Angeles. The violence was attributed to the residents' frustration with poor housing, unemployment, and racial discrimination. That same year, Mexican-American migrant workers in California's grape industry began a five-year strike, under the leadership of Cesar Chavez, for better wages and living conditions.
Also in the 1960's, California surpassed New York as the nation's most populous state. In 1965, California's Indians were granted some $30 million by the federal government to compensate them for lands taken from them in the 19th century. Ronald Reagan was elected governor in 1967, and pledged to reduce state spending and cut taxes. Former senator Nixon was elected President of the United States in 1968.
In the 1960's and 1970's, California became a center of the semiconductor and computer industries. The Santa Clara Valley, south of San Francisco, came to be nicknamed Silicon Valley because numerous companies manufacturing silicon computer chips were located there. In the 1970's, California, like many other states, had problems with pollution, unemployment, and inflation. In 1978 a voter initiative enacted a constitutional amendment, popularly called Proposition 13, that drastically cut property taxes and limited the growth of government.
During the recession of 1981-82, the state suffered high unemployment and a sharp drop in tax revenues. Widespread crop and property damage resulted from severe winter storms during 1981-83. In 1989 the northern part of the state was hit by a major earthquake that caused extensive property damage and about 50 deaths.
Partly because of large numbers of immigrants from the Eastern Hemisphere and Central America, the state's population continued to grow rapidly. In 1986, a referendum was approved to make English the state's official language, and in 1994 another was passed to block illegal immigrants from receiving certain social services. Implementation of the 1994 measure was blocked by lawsuits and abandoned in 1999; Governor Gray Davis said most of the proposition was covered by 1996 federal immigration laws.
The recession of the early 1990's led to severe financial problems for the state government. In April, 1992, race riots in the Los Angeles area claimed some 50 lives and caused substantial property damage. The Los Angeles area was also struck by major earthquakes in June 1992 and January 1994.
California deregulated its electric industry in 1996. In 2001 the state experienced blackouts and high energy costs. California held a special election in 2003 in which Governor Gray Davis was recalled—that is, removed from office. Arnold Schwarzenegger, a motion picture actor, was elected in his place. In 2006, Schwarzenegger was elected to a full term as governor. | http://history.howstuffworks.com/american-history/history-of-california.htm/printable | 13 |
Overview | In this lesson, students consider the Deepwater Horizon oil spill in the Gulf of Mexico and related cleanup efforts. They then design and execute experiments to learn more about the effects of oil spills, and apply their findings to the coastal communities in the gulf region. Finally, they explore the economic and political impacts of the oil spill as well as the technological progress toward stopping the leak.
Materials | Copies of the lab experiment handout, computers with Internet access, containers, vegetable oil, mineral oil, molasses, mud, sand, gravel, paper towels, fabrics, plastic wrap, coffee filters, sponges, detergents and soaps, balances, graduated cylinders and other materials for student experiments
Warm-up | Prepare to have students view or provide online access to the New York Times interactive feature “Tracking the Oil Spill.” (Be sure to hit “play” and/or use the slide bar to show how the oil slick has moved and grown.) You may also wish to provide this graphic summary of the spill.
Using the graphics, lead a discussion about the Deepwater Horizon oil spill and the threatened coastlines in the gulf. Ask: What did you hear about the spill before? What information is shown in the graphic? How does it help you understand the spill? Based on what you see here, how do you think coastlines will be affected versus underwater areas? What environmental factors do you think are affecting the spread of the oil? How are crews trying to contain or clean up the oil in the gulf?
You might also discuss the ecosystem of the area by asking, What does the gulf coast look like? Why are marshlands, oyster beds and breeding grounds important? Would you expect this area to be rocky, sandy or muddy? How does the type of sediment in the area affect the way the oil will interact with this environment now and in years to come? How does it affect the best practices for clean-up efforts?
Explain that scientists, fishermen and environmentalists are closely watching and tracking the flow of oil into and near the sensitive coastal regions because of the area’s marshlands, oyster beds and estuaries. Discuss how ocean currents, weather and winds are affecting the path of the oil.
You might also touch on the latest progress in the containment process, such as efforts to burn off the oil and the use of chemical dispersants. [UPDATE: After the failure of the containment dome, BP is now employing a “top kill” technique, which can be used in conjunction with a method known as a “junk shot.” The E.P.A. has asked BP to stop using the dispersant Corexit and worries over the safety of chemical dispersants rise. Despite booms in many places, oil reached parts of the coast and cleanup efforts continue. On May 16, BP was able to insert a narrow tube into the leak, allowing it to divert a small percentage of the leaking oil to a drill ship on the surface.]
Related | In the article “On Defensive, BP Readies Dome to Contain Spill,” Ian Urbina, Justin Gillis and Clifford Krauss investigate the causes of the explosion and oil spill in the Gulf of Mexico and BP’s efforts to stanch the leaks:
BP spent Monday preparing possible solutions to stem oil leaks from an undersea well off the Louisiana coast, and fending off new accusations about its role in the widening environmental disaster.
Crews were building a containment dome, a 4-story, 70-ton structure that the company plans to lower into place over one of the three leaks to catch the escaping oil and allow it to be pumped to the surface.
The company was also planning to install a shutoff valve at the site of one of the leaks on Monday, but the seas were too rough, delaying that effort. Heavy winds damaged miles of floating booms laid out in coastal waters to protect the shoreline from the spreading oil slick, which appeared to be drifting toward the Alabama and Florida coasts and the Chandeleur Islands off Louisiana’s southern tip.
Read the entire article with your class, using the questions below. Depending on prior student knowledge, you may wish to supplement or substitute this article with the overview from the Gulf of Mexico Oil Spill (2010) Times Topics page or the latest news from the Lede. [UPDATE: Students tracking the efforts to stop the oil leak might read the article “BP Resumes Work to Plug Oil Leak After Facing Setback,” on the “top kill” technique employed after the failure of the containment dome.]
Questions | For reading comprehension and discussion:
- What is a containment dome and how does it work? What other methods is BP using to stop the oil leaks?
- What is the blowout preventer and what problems did it have?
- What groups have a vested interest in this problem and how is the oil spill, and the extent of damage before a solution is found, likely to affect each group?
- In your opinion, what changes should be made to ensure that this type of problem does not happen again?
From The Learning Network
- Times Topics: Gulf of Mexico Oil Spill (2010)
- Times Topics: Oil Spills
- News Analysis: Gulf Oil Spill Is Bad, but How Bad?
Activity | Explain to students that they will design a lab experiment to learn more about oil spills and subsequent cleanup efforts. Split the class into lab partners or small groups and give each group our lab experiment handout (PDF).
Begin by having the class or small groups brainstorm a list of questions they might ask, such as “Why does oil float on water?” or “What factors influence the formation of mousse in sea water?” or “How do different types of oil interact with various sediment types?”
Another idea is for students to simulate multiple cleanup methods (Safety note: Do not allow students to simulate burning the oil), then draw conclusions about their effectiveness through observations. [UPDATE: In light of environmentalists claiming that cleanup technology, namely booms, skimmers and chemical dispersants, has not kept pace with advancements in drilling, students might collaborate to design a new method to soak up or contain oil from water and soil sediments. Inspiration might come from the reader’s comments in response to this lesson plan, this invention by an Alabama hairstylist or their own imaginations. ]
After students design, implement and complete their lab, have them apply what they have learned to the specific problems in the Gulf of Mexico.
For example, students investigating how oil interacts with beach sediment might use data from the USGS Coastal and Marine Geology Program’s U.S. Gulf of Mexico Internet Map Server to learn more about the sediment types in the Gulf of Mexico. To use this tool, have them remove the default layers by unclicking the boxes in the layers list on the right side of the page, then click the box “Surficial sediments” to display this layer. In the tools box on the left side, students can click the “Toggle Legend” tool to display the key for the sediment layer shown. Students may also use the Zoom In tool to narrow the visible area to the coastal regions of Louisiana, Mississippi and Alabama. The “More Info” and “Data Catalog” buttons can be used to obtain more information about this data layer.
As necessary or applicable, have students research relevant background information, such as the composition of the oil leaking in the gulf, information of marsh habitats, wildlife in the gulf area, or past environmental damage to similar areas and breeding grounds.
Provide time for lab groups to share their findings with the whole class and discuss what all the experiments suggest about an oil spill like this one. What questions remain?
Have students continue to follow the news to compare their laboratory conclusions to the real-life developments in the gulf.
Going further | Individually or in small groups, students supplement their understanding by looking at this oil spill from a different perspective. [Update: We now have a collection of Times and Learning Network activities on this topic here.]
For example, students might see how the oil spill will likely affect the economic livelihood of many in the gulf area and the perseverance of many who were already affected by Hurricane Katrina.
Standards | From McREL, for grades 6-12:
11. Understands the interrelationship of the building trades and society
19. Understands the interrelationship of manufacturing and society
8. Understands the characteristics of ecosystems on Earth’s surface
14. Understands how human actions modify the physical environment
18. Understands global development and environmental issues
2. Knows environmental and external factors that affect individual and community health
6. Understands relationships among organisms and their physical environment
11. Understands the nature of scientific knowledge
13. Understands the scientific enterprise
3. Understands the relationships among science, technology, society, and the individual
44. Understands the search for community, stability, and peace in an interdependent world
46. Understands long-term changes and recurring patterns in world history | http://learning.blogs.nytimes.com/2010/05/05/the-drill-on-the-spill-learning-about-the-gulf-oil-leak-in-the-lab/?nl=learning&emc=a1 | 13 |
In Economics, the word 'demand' is used to show the relationship between the prices of a commodity and the amounts of the commodity which consumers want to purchase at those prices.
Definition of Demand:
Hibdon defines, “Demand means the various quantities of goods that would be purchased per time period at different prices in a given market.”
Bober defines, “By demand we mean the various quantities of given commodity or service which consumers would buy in one market in a given period of time at various prices, or at various incomes, or at various prices of related goods.”
Demand for product implies:
a) desires to acquire it,
b) willingness to pay for it, and
c) Ability to pay for it.
All three must be checked to identify and establish demand. For example, a poor man's desire to stay in a five-star hotel room and his willingness to pay rent for that room is not 'demand', because he lacks the necessary purchasing power; it is merely wishful thinking. Similarly, a miser's desire for, and his ability to pay for, a car is not 'demand', because he does not have the necessary willingness to pay for a car. One may also come across a well-established person who possesses both the willingness and the ability to pay for higher education, but who really has no desire for it; he pays the fees for a regular course and eventually does not attend his classes. Thus, in an economic sense, he does not have a 'demand' for a higher education degree/diploma.
It should also be noted that the demand for a product (a commodity or a service) has no meaning unless it is stated with specific reference to the time, its price, the prices of its related goods, consumers' income, tastes, etc. This is because demand, as the term is used in Economics, varies with fluctuations in these factors.
To say that demand for an Atlas cycle in India is 60,000 is not meaningful unless it is stated in terms of the year, say 1983, when an Atlas cycle's price was around Rs. 800, competing cycles' prices were around the same, and a scooter's price was around Rs. 5,000. In 1984, the demand for an Atlas cycle could be different if any of the above factors happened to be different. For example, instead of the domestic (Indian) market, one may be interested in a foreign market as well. Naturally the demand estimate will be different. Furthermore, it should be noted that a commodity is defined with reference to its particular quality/brand; if its quality/brand changes, it can be deemed another commodity.
To sum up, we can say that the demand for a product is the desire for that product backed by willingness as well as ability to pay for it. It is always defined with reference to a particular time, place, price and given values of other variables on which it depends.
Demand Function and Demand Curve
A demand function is a comprehensive formulation which specifies the factors that influence the demand for the product. What are the factors that affect demand?
Dx = D (Px, Py, Pz, B, W, A, E, T, U)
Here Dx, stands for demand for item x (say, a car)
Px, its own price (of the car)
Py, the price of its substitutes (other brands/models)
Pz, the price of its complements (like petrol)
B, the income (budget) of the purchaser (user/consumer)
W, the wealth of the purchaser
A, the advertisement for the product (car)
E, the price expectation of the user
T, taste or preferences of user
U, all other factors.
Briefly we can state the impact of these determinants, as we observe in normal circumstances:
i) Demand for X is inversely related to its own price. As price rises, the demand tends to fall and vice versa.
ii) The demand for X is also influenced by the prices of goods related to X. For example, if Y is a substitute of X, then as the price of Y goes up, the demand for X also tends to increase, and vice versa. In the same way, if the price of Z, a complement of X (such as petrol for a car), goes up, the demand for X tends to fall.
iii) The demand for X is also sensitive to price expectation of the consumer; but here, much would depend on the psychology of the consumer; there may not be any definite relation.
This is speculative demand. When the price of a share is expected to go up, some people may buy more of it in their attempt to make future gains; others may buy less of it, or rather dispose of it, to make some immediate gain. Thus the price expectation effect on demand is not certain.
iv) The income (budget position) of the consumer is another important influence on demand. As income (real purchasing capacity) goes up, people buy more of ‘normal goods’ and less of ‘inferior goods’. Thus income effect on demand may be positive as well as negative. The demand of a person (or a household) may be influenced not only by the level of his own absolute income, but also by relative income—his income relative to his neighbour’s income and his purchase pattern. Thus a household may demand a new set of furniture, because his neighbour has recently renovated his old set of furniture. This is called ‘demonstration effect’.
v) Past income, or accumulated savings out of that income, and expected future income (its discounted value), along with present income, permanent and transitory, together determine the nominal stock of wealth of a person. To this, you may also add his current stock of assets and other forms of physical capital; finally, adjust this for the price level. The real wealth of the consumer, thus computed, will have an influence on his demand. A person may pool all his resources to construct the ground floor of his house. If he has access to some additional resources, he may then construct the first floor rather than buying a flat. Similarly one who has a color TV (rather than a black-and-white one) may demand a V.C.R./V.C.P. This is regarded as the real wealth effect on demand.
vi) Advertisement also affects demand. It is observed that the sales revenue of a firm increases in response to advertisement up to a point. This is the promotional effect on demand (sales).
vii) Tastes, preferences, and habits of individuals have a decisive influence on their pattern of demand. Sometimes, even social pressures (customs, traditions, and conventions) exercise a strong influence on demand. These socio-psychological determinants of demand often defy any theoretical construction; they are non-economic and non-market factors, and highly indeterminate. In some cases, the individual reveals his choice (demand) preferences; in some cases, his choice may be strongly ordered. We will revisit these concepts in the next unit.
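To pull the determinants above together, here is a minimal Python sketch of a multivariate demand function of the form Dx = D(Px, Py, Pz, B, A). The linear form and every coefficient in it are hypothetical assumptions chosen only to match the signs discussed above (own price negative, substitute price positive, complement price negative, income and advertising positive); they are not estimates from any data.

```python
def demand_x(px, py, pz, income, adv):
    """Illustrative linear demand function Dx = D(Px, Py, Pz, B, A).

    px: own price of X; py: price of a substitute Y; pz: price of a
    complement Z; income: consumer's budget; adv: advertising outlay.
    All coefficients are made up; only their signs follow the discussion.
    """
    return (500.0
            - 0.8 * px       # own price: inverse relation
            + 0.3 * py       # substitute's price: direct relation
            - 0.2 * pz       # complement's price: inverse relation
            + 0.01 * income  # normal good: higher income, higher demand
            + 0.05 * adv)    # advertising raises demand, up to a point

# Holding every other determinant fixed and varying only the own price
# traces out the single-variable demand curve Dx = D(Px).
for px in (100, 200, 300):
    print(px, demand_x(px, py=150, pz=50, income=20000, adv=1000))
```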
You may now note that there are various determinants of demand, which may be explicitly taken care of in the form of a demand function. By contrast, a demand curve only considers the price-demand relation, other things (factors) remaining the same. This relationship can be illustrated in the form of a table called a demand schedule, and the data from the table may be given a diagrammatic representation in the form of a curve. In other words, a generalized demand function is a multivariate function whereas the demand curve is a single-variable demand function.
Dx = D(Px)
In the slope-intercept form, the demand curve may be stated as
Dx = α + βPx, where α is the intercept term and β the slope, which is negative because of the inverse relationship between Dx and Px.
Suppose β = -0.5 and α = 10.
Then the demand function is: D = 10 - 0.5P
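As a quick check on this linear demand curve, the short Python sketch below tabulates the demand schedule implied by D = 10 - 0.5P at a handful of prices; the price points themselves are arbitrary.

```python
# Demand curve D = a + b*P with a = 10 and b = -0.5, as in the text.
a, b = 10, -0.5

def quantity_demanded(price):
    """Quantity demanded at a given price; it falls as the price rises."""
    return a + b * price

# A small demand schedule: each price paired with the quantity demanded.
for price in (0, 4, 8, 12, 16, 20):
    print(f"P = {price:>2}  ->  D = {quantity_demanded(price):.1f}")
```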
You can use formulas and functions in lists or libraries to calculate data in a variety of ways. By adding a calculated column to a list or library, you can create a formula that includes data from other columns and performs functions to calculate dates and times, to perform mathematical equations, or to manipulate text. For example, on a tasks list, you can use a column to calculate the number of days it takes to complete each task, based on the Start Date and Date Completed columns.
Note: This article describes the basic concepts related to using formulas and functions. For specific information about a particular function, see the article about that function.
Formulas are equations that perform calculations on values in a list or library. A formula starts with an equal sign (=). For example, the following formula multiplies 2 by 3 and then adds 5 to the result.

=5+2*3
You can use a formula in a calculated column and to calculate default values for a column. A formula can contain functions (prewritten formulas that take one or more values, perform an operation, and return one or more values), column references, operators (signs or symbols that specify the type of calculation to perform within an expression), and constants (values that are not calculated and therefore do not change), as in the following example.

=PI()*[Result]^2

|Function|The PI() function returns the value of pi: 3.141592654.|
|Reference (or column name)|[Result] represents the value in the Result column for the current row.|
|Constant|Numbers or text values entered directly into a formula, such as 2.|
|Operator|The * (asterisk) operator multiplies, and the ^ (caret) operator raises a number to a power.|
A formula might use one or more of the elements from the previous table. Here are some examples of formulas (in order of complexity).
Simple formulas (such as =128+345)
The following formulas contain constants and operators.
|=128+345|Adds 128 and 345|
Formulas that contain column references (such as =[Revenue] >[Cost])
The following formulas refer to other columns in the same list or library.
|=[Revenue]|Uses the value in the Revenue column.|
|=[Revenue]*10%|10% of the value in the Revenue column.|
|=[Revenue] > [Cost]|Returns Yes if the value in the Revenue column is greater than the value in the Cost column.|
Formulas that call functions (such as =AVERAGE(1, 2, 3, 4, 5))
The following formulas call built-in functions.
|=AVERAGE(1, 2, 3, 4, 5)|Returns the average of a set of values.|
|=MAX([Q1], [Q2], [Q3], [Q4])|Returns the largest value in a set of values.|
|=IF([Cost]>[Revenue], "Not OK", "OK")|Returns Not OK if cost is greater than revenue. Else, returns OK.|
|=DAY("15-Apr-2008")|Returns the day part of a date. This formula returns the number 15.|
Formulas with nested functions (such as =SUM(IF([A]>[B], [A]-[B], 10), [C]))
The following formulas specify one or more functions as function arguments.
|=SUM(IF([A]>[B], [A]-[B], 10), [C])|The IF function returns the difference between the values in columns A and B, or 10. The SUM function adds the return value of the IF function and the value in column C.|
|=DEGREES(PI())|The PI function returns the number 3.141592654. The DEGREES function converts a value specified in radians to degrees. This formula returns the value 180.|
|=ISNUMBER(FIND("BD",[Column1]))|The FIND function searches for the string BD in Column1 and returns the starting position of the string. It returns an error value if the string is not found. The ISNUMBER function returns Yes if the FIND function returned a numeric value. Else, it returns No.|
Functions are predefined formulas that perform calculations by using specific values, called arguments, in a particular order, or structure. Functions can be used to perform simple or complex calculations. For example, the following instance of the ROUND function rounds off a number in the Cost column to two decimal places.

=ROUND([Cost], 2)
The following vocabulary is helpful when you are learning functions and formulas:
Structure The structure of a function begins with an equal sign (=), followed by the function name, an opening parenthesis, the arguments for the function separated by commas, and a closing parenthesis.
Function name This is the name of a function that is supported by lists or libraries. Each function takes a specific number of arguments, processes them, and returns a value.
Arguments Arguments can be numbers, text, logical values such as True or False, or column references. The argument that you designate must produce a valid value for that argument. Arguments can also be constants, formulas, or other functions.
In certain cases, you may need to use a function as one of the arguments of another function. For example, the following formula uses a nested AVERAGE function and compares the result with the sum of two column values.
Valid returns When a function is used as an argument, it must return the same type of value that the argument uses. For example, if the argument uses Yes or No, then the nested function must return Yes or No. If it doesn't, the list or library displays a #VALUE! error value.
Nesting level limits A formula can contain up to eight levels of nested functions. When Function B is used as an argument in Function A, Function B is a second-level function. In the example above for instance, the SUM function is a second-level function because it is an argument of the AVERAGE function. A function nested within the SUM function would be a third-level function, and so on.
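The evaluation order of nested functions is easier to see in ordinary code. The sketch below is not SharePoint formula syntax; it is a small Python stand-in for the earlier example =SUM(IF([A]>[B], [A]-[B], 10), [C]), showing that the second-level function runs first and its return value becomes an argument of the first-level function. The row values are invented for illustration.

```python
# Python stand-in for =SUM(IF([A]>[B], [A]-[B], 10), [C]).

def if_(condition, value_if_true, value_if_false):
    """Mimics the IF function: returns one of two values based on a test."""
    return value_if_true if condition else value_if_false

row = {"A": 12, "B": 5, "C": 30}  # hypothetical values for the current row

# The nested (second-level) IF is evaluated first ...
inner = if_(row["A"] > row["B"], row["A"] - row["B"], 10)  # -> 7
# ... and its return value becomes an argument of the outer (first-level) SUM.
outer = sum([inner, row["C"]])                             # -> 37

print(inner, outer)  # 7 37
```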
- Lists and libraries do not support the RAND and NOW functions.
- The TODAY and ME functions are not supported in calculated columns but are supported in the default value setting of a column.
Using column references in a formula
A reference identifies a cell in the current row and indicates to a list or library where to search for the values or data that you want to use in a formula. For example, [Cost] references the value in the Cost column in the current row. If the Cost column has the value of 100 for the current row, then =[Cost]*3 returns 300.
With references, you can use the data that is contained in different columns of a list or library in one or more formulas. Columns of the following data types can be referenced in a formula: single line of text, number, currency, date and time, choice, yes/no, and calculated.
You use the display name of the column to reference it in a formula. If the name includes a space or a special character, you must enclose the name in square brackets ([ ]). References are not case-sensitive. For example, you can reference the Unit Price column in a formula as [Unit Price] or [unit price].
- You cannot reference a value in a row other than the current row.
- You cannot reference a value in another list or library.
- You cannot reference the ID of a row for a newly inserted row. The ID does not yet exist when the calculation is performed.
- You cannot reference another column in a formula that creates a default value for a column.
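One way to picture a column reference is as a lookup into the current row. The short Python sketch below is only a conceptual model of that lookup, not how SharePoint actually evaluates formulas; the column names and values are invented, and the case-insensitive match mirrors the note above that references are not case-sensitive.

```python
# Conceptual model: a list item is a mapping from column names to values,
# and a reference such as [Cost] is a lookup into the current row.
current_row = {"Cost": 100, "Revenue": 250, "Unit Price": 9.99}

def ref(row, column_name):
    """Resolve a column reference against the current row, ignoring case."""
    for name, value in row.items():
        if name.lower() == column_name.lower():
            return value
    raise KeyError(column_name)

print(ref(current_row, "Cost") * 3)                            # =[Cost]*3 -> 300
print(ref(current_row, "revenue") > ref(current_row, "cost"))  # =[Revenue]>[Cost] -> True
```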
Using constants in a formula
A constant is a value that is not calculated. For example, the date 10/9/2008, the number 210, and the text "Quarterly Earnings" are all constants. Constants can be of the following data types:
- String (Example: =[Last Name] = "Smith")
String constants are enclosed in quotation marks and can include up to 255 characters.
- Number (Example: =[Cost] >= 29.99)
Numeric constants can include decimal places and can be positive or negative.
- Date (Example: =[Date] > DATE(2007,7,1))
Date constants require the use of the DATE(year,month,day) function.
- Boolean (Example: =IF([Cost]>[Revenue], "Loss", "No Loss"))
Yes and No are Boolean constants. You can use them in conditional expressions. In the above example, if Cost is greater than Revenue, the IF function returns Yes, and the formula returns the string "Loss". If Cost is equal to or less than Revenue, the function returns No, and the formula returns the string "No Loss".
Using calculation operators in a formula
Operators specify the type of calculation that you want to perform on the elements of a formula. Lists and libraries support three different types of calculation operators: arithmetic, comparison, and text.
Use the following arithmetic operators to perform basic mathematical operations such as addition, subtraction, or multiplication; to combine numbers; or to produce numeric results.
|+ (plus sign)|Addition|
|– (minus sign)|Subtraction or negation|
|/ (forward slash)|Division|
|% (percent sign)|Percent|
You can compare two values with the following operators. When two values are compared by using these operators, the result is a logical value of Yes or No.
|= (equal sign)|Equal to (A=B)|
|> (greater than sign)|Greater than (A>B)|
|< (less than sign)|Less than (A<B)|
|>= (greater than or equal to sign)|Greater than or equal to (A>=B)|
|<= (less than or equal to sign)|Less than or equal to (A<=B)|
|<> (not equal to sign)|Not equal to (A<>B)|
Use the ampersand (&) to join, or concatenate, one or more text strings to produce a single piece of text.
|& (ampersand)|Connects, or concatenates, two values to produce one continuous text value ("North"&"wind")|
Order in which a list or library performs operations in a formula
Formulas calculate values in a specific order. A formula might begin with an equal sign (=). Following the equal sign are the elements to be calculated (the operands), which are separated by calculation operators. Lists and libraries calculate the formula from left to right, according to a specific order for each operator in the formula.
If you combine several operators in a single formula, lists and libraries perform the operations in the order shown in the following table. If a formula contains operators with the same precedence — for example, if a formula contains both a multiplication operator and a division operator — lists and libraries evaluate the operators from left to right.
|– (negation)|Negation (as in –1)|
|* and /|Multiplication and division|
|+ and –|Addition and subtraction|
|& (ampersand)|Concatenation (connects two strings of text)|
|= < > <= >= <>|Comparison|
Use of parentheses
To change the order of evaluation, enclose in parentheses the part of the formula that is to be calculated first. For example, the following formula produces 11 because a list or library calculates multiplication before addition. The formula multiplies 2 by 3 and then adds 5 to the result.

=5+2*3
In contrast, if you use parentheses to change the syntax, the list or library adds 5 and 2 together and then multiplies the result by 3 to produce 21.

=(5+2)*3
In the example below, the parentheses around the first part of the formula force the list or library to calculate [Cost]+25 first and then divide the result by the sum of the values in columns EC1 and EC2.

=([Cost]+25)/SUM([EC1],[EC2])
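Standard arithmetic follows the same precedence rules, so the results above are easy to verify outside a list or library. The Python sketch below reproduces the 11-versus-21 example; the values used for [Cost], [EC1], and [EC2] are invented placeholders.

```python
# Multiplication binds more tightly than addition, so 5 + 2*3 gives 11 ...
print(5 + 2 * 3)      # 11
# ... while parentheses force the addition to happen first, giving 21.
print((5 + 2) * 3)    # 21

# Same idea with column-style values: compute Cost + 25 first,
# then divide the result by the sum of EC1 and EC2.
cost, ec1, ec2 = 75, 2, 3          # hypothetical row values
print((cost + 25) / (ec1 + ec2))   # 20.0
```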
Fourteenth Amendment to the United States Constitution
The Fourteenth Amendment (Amendment XIV) to the United States Constitution was adopted on July 9, 1868, as one of the Reconstruction Amendments. Its first section, which has frequently been the subject of lawsuits, includes several clauses: the Citizenship Clause, Privileges or Immunities Clause, Due Process Clause, and Equal Protection Clause.
The Citizenship Clause provides a broad definition of citizenship, overruling the Supreme Court's decision in Dred Scott v. Sandford (1857), which had held that Americans descended from African slaves could not be citizens of the United States. The Citizenship Clause is followed by the Privileges or Immunities Clause, which has been interpreted in such a way that it does very little.
The Due Process Clause prohibits state and local government officials from depriving persons of life, liberty, or property without legislative authorization. This clause has also been used by the federal judiciary to make most of the Bill of Rights applicable to the states, as well as to recognize substantive and procedural requirements that state laws must satisfy.
The Equal Protection Clause requires each state to provide equal protection under the law to all people within its jurisdiction. This clause was the basis for Brown v. Board of Education (1954), the Supreme Court decision that precipitated the dismantling of racial segregation in the United States, and for many other decisions rejecting irrational or unnecessary discrimination against people belonging to various groups. The second, third, and fourth sections of the amendment are seldom, if ever, litigated; the fifth section gives Congress enforcement power.
Section 1. All persons born or naturalized in the United States, and subject to the jurisdiction thereof, are citizens of the United States and of the State wherein they reside. No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law; nor deny to any person within its jurisdiction the equal protection of the laws.
Section 2. Representatives shall be apportioned among the several States according to their respective numbers, counting the whole number of persons in each State, excluding Indians not taxed. But when the right to vote at any election for the choice of electors for President and Vice President of the United States, Representatives in Congress, the Executive and Judicial officers of a State, or the members of the Legislature thereof, is denied to any of the male inhabitants of such State, being twenty-one years of age, and citizens of the United States, or in any way abridged, except for participation in rebellion, or other crime, the basis of representation therein shall be reduced in the proportion which the number of such male citizens shall bear to the whole number of male citizens twenty-one years of age in such State.
Section 3. No person shall be a Senator or Representative in Congress, or elector of President and Vice President, or hold any office, civil or military, under the United States, or under any State, who, having previously taken an oath, as a member of Congress, or as an officer of the United States, or as a member of any State legislature, or as an executive or judicial officer of any State, to support the Constitution of the United States, shall have engaged in insurrection or rebellion against the same, or given aid or comfort to the enemies thereof. But Congress may, by a vote of two-thirds of each House, remove such disability.
Section 4. The validity of the public debt of the United States, authorized by law, including debts incurred for payment of pensions and bounties for services in suppressing insurrection or rebellion, shall not be questioned. But neither the United States nor any State shall assume or pay any debt or obligation incurred in aid of insurrection or rebellion against the United States, or any claim for the loss or emancipation of any slave; but all such debts, obligations and claims shall be held illegal and void.

Section 5. The Congress shall have power to enforce, by appropriate legislation, the provisions of this article.
Citizenship and civil rights
Section 1 formally defines United States citizenship and protects individual civil and political rights from being abridged or denied by any state. In effect, it overruled the Supreme Court's Dred Scott decision that black people were not citizens and could not become citizens, nor enjoy the benefits of citizenship. The Civil Rights Act of 1866 had granted citizenship to all persons born in the United States if they were not subject to a foreign power. The framers of the Fourteenth Amendment wanted this principle enshrined into the Constitution to protect the new Civil Rights Act from being declared unconstitutional by the Supreme Court and to prevent a future Congress from altering it by a mere majority vote.
This section was also in response to the Black Codes that southern states had passed in the wake of the Thirteenth Amendment, which abolished slavery in the United States. The Black Codes attempted to return former slaves to something like their former condition by, among other things, restricting their movement, forcing them to enter into year-long labor contracts, prohibiting them from owning firearms, and by preventing them from suing or testifying in court.
Finally, this section was in response to violence against black people within the southern states. A Joint Committee on Reconstruction found that only a Constitutional amendment could protect black people's rights and welfare within those states. This section has been the most frequently litigated part of the amendment, and this amendment has been the most frequently litigated part of the Constitution.
There are varying interpretations of the original intent of Congress, based on statements made during the congressional debate over the amendment. During the original debate over the amendment Senator Jacob M. Howard of Michigan—the author of the Citizenship Clause—described the clause as having the same content, despite different wording, as the earlier Civil Rights Act of 1866, namely, that it excludes Native Americans who maintain their tribal ties and "persons born in the United States who are foreigners, aliens, who belong to the families of ambassadors or foreign ministers." According to historian Glenn W. LaFantasie of Western Kentucky University, "A good number of his fellow senators supported his view of the citizenship clause." Others also agreed that the children of ambassadors and foreign ministers were to be excluded. However, concerning children born in the United States to parents who are not citizens of the United States (and not foreign diplomats), three Senators, including Senate Judiciary Committee Chairman Lyman Trumbull, the author of the Civil Rights Act, as well as President Andrew Johnson, asserted that both the Civil Rights Act and the Fourteenth Amendment would confer citizenship on them at birth, and no Senator offered a contrary opinion.
Senator James Rood Doolittle of Wisconsin asserted that all Native Americans were subject to United States jurisdiction, so that the phrase "Indians not taxed" would be preferable, but Trumbull and Howard disputed this, arguing that the federal government did not have full jurisdiction over Native American tribes, which govern themselves and make treaties with the United States. In Elk v. Wilkins (1884), the clause's meaning was tested regarding whether birth in the United States automatically extended national citizenship. The Supreme Court held that Native Americans who voluntarily quit their tribes did not automatically gain national citizenship.
The clause's meaning was tested again in United States v. Wong Kim Ark (1898). The Supreme Court held that under the Fourteenth Amendment, a man born within the United States to Chinese citizens who have a permanent domicile and residence in the United States and are carrying on business in the United States—and whose parents were not employed in a diplomatic or other official capacity by a foreign power—was a citizen of the United States. Subsequent decisions have applied the principle to the children of foreign nationals of non-Chinese descent. In 2010, Republican Senators discussed revising the amendment's providing of birthright citizenship to reduce the practice of "birth tourism", in which a pregnant foreign national gives birth in the United States for purposes of the child's citizenship.
Loss of citizenship
Loss of national citizenship is possible only under the following circumstances:
- Fraud in the naturalization process. Technically, this is not loss of citizenship but rather a voiding of the purported naturalization and a declaration that the immigrant never was a citizen of the United States.
- Voluntary relinquishment of citizenship. This may be accomplished either through renunciation procedures specially established by the State Department or through other actions that demonstrate desire to give up national citizenship.
For much of the country's history, voluntary acquisition or exercise of a foreign citizenship was considered sufficient cause for revocation of national citizenship. This concept was enshrined in a series of treaties between the United States and other countries (the Bancroft Treaties). However, the Supreme Court repudiated this concept in Afroyim v. Rusk (1967), as well as Vance v. Terrazas (1980), holding that the Citizenship Clause of the Fourteenth Amendment barred the Congress from revoking citizenship.
Privileges or Immunities Clause
In the Slaughter-House Cases (1873), the Supreme Court ruled that the amendment's Privileges or Immunities Clause was limited to "privileges or immunities" granted to citizens by the federal government by virtue of national citizenship. The Court further held in the Civil Rights Cases (1883) that the amendment was limited to "state action" and, therefore, did not authorize the Congress to outlaw racial discrimination on the part of private individuals or organizations. Neither of these decisions has been overruled and they have been specifically reaffirmed several times.
Despite fundamentally differing views concerning the coverage of the Privileges or Immunities Clause of the Fourteenth Amendment, most notably expressed in the majority and dissenting opinions in the Slaughter-House Cases (1873), it has always been common ground that this Clause protects the third component of the right to travel. Writing for the majority in the Slaughter-House Cases, Justice Miller explained that one of the privileges conferred by this Clause "is that a citizen of the United States can, of his own volition, become a citizen of any State of the Union by a bona fide residence therein, with the same rights as other citizens of that State."
Due Process Clause
The Due Process Clause of the amendment protects both procedural due process—the guarantee of a fair legal process—and substantive due process—the guarantee that the fundamental rights of citizens will not be encroached on by government.
Beginning with Allgeyer v. Louisiana (1897), the Court interpreted the Due Process Clause as providing substantive protection to private contracts and thus prohibiting a variety of social and economic regulation, under what was referred to as "freedom of contract". Thus, the Court struck down a law decreeing maximum hours for workers in a bakery in Lochner v. New York (1905) and struck down a minimum wage law in Adkins v. Children's Hospital (1923). In Meyer v. Nebraska (1923), the Court stated that the "liberty" protected by the Due Process Clause
"[w]ithout doubt...denotes not merely freedom from bodily restraint but also the right of the individual to contract, to engage in any of the common occupations of life, to acquire useful knowledge, to marry, establish a home and bring up children, to worship God according to the dictates of his own conscience, and generally to enjoy those privileges long recognized at common law as essential to the orderly pursuit of happiness by free men."
However, the Court did uphold some economic regulation such as state prohibition laws (Mugler v. Kansas, 1887), laws declaring maximum hours for mine workers (Holden v. Hardy, 1898), laws declaring maximum hours for female workers (Muller v. Oregon, 1908), President Wilson's intervention in a railroad strike (Wilson v. New, 1917), as well as federal laws regulating narcotics (United States v. Doremus, 1919). The Court repudiated the "freedom of contract" line of cases in West Coast Hotel v. Parrish (1937).
By the 1960s, the Court had extended its interpretation of substantive due process to include rights and freedoms that are not specifically mentioned in the Constitution but that, according to the Court, extend or derive from existing rights. The Court has also significantly expanded the reach of procedural due process, requiring some sort of hearing before the government may terminate civil service employees, expel a student from public school, or cut off a welfare recipient's benefits.
The Due Process Clause is also the foundation of a constitutional right to privacy. The Court first ruled that privacy was protected by the Constitution in Griswold v. Connecticut (1965), which overturned a Connecticut law criminalizing birth control. While Justice William O. Douglas wrote for the majority that the right to privacy was found in the "penumbras" of the Bill of Rights, Justices Arthur Goldberg and John Marshall Harlan II wrote in concurring opinions that the "liberty" protected by the Due Process Clause included individual privacy.
The right to privacy became the basis for Roe v. Wade (1973), in which the Court invalidated a Texas law forbidding abortion except to save the mother's life. Like Goldberg and Harlan's concurrences in Griswold, the majority opinion authored by Justice Harry A. Blackmun located the right to privacy in the Due Process Clause's protection of liberty. The decision disallowed many state and federal abortion restrictions, and became one of the most controversial in the Court's history. In Planned Parenthood v. Casey (1992), the Court decided that "the essential holding of Roe v. Wade should be retained and once again reaffirmed." In Lawrence v. Texas (2003), the Court found that a Texas law against same-sex sexual intercourse violated the right to privacy.
The Court has ruled that, in certain circumstances, the Due Process Clause requires a judge to recuse himself where there is a serious risk of a conflict of interest. For example, in Caperton v. A.T. Massey Coal Co. (2009), the Court ruled that a justice of the Supreme Court of Appeals of West Virginia had to recuse himself from a case involving a major contributor to his campaign for election to that court.
While many state constitutions are modeled after the United States Constitution and federal laws, those state constitutions did not necessarily include provisions comparable to the Bill of Rights. In Barron v. Baltimore (1833), the Supreme Court unanimously ruled that the Bill of Rights restrained only the federal government, not the states. Under the Fourteenth Amendment, most provisions of the Bill of Rights have been held to apply to the states as well as the federal government, a process known as incorporation.
Whether this incorporation was intended by the amendment's framers, such as John Bingham, has been debated by legal historians. According to legal scholar Akhil Reed Amar, the framers and early supporters of the Fourteenth Amendment believed that it would ensure that the states would be required to recognize the same individual rights as the federal government; all of these rights were likely understood as falling within the "privileges or immunities" safeguarded by the amendment.
By the latter half of the 20th century, nearly all of the rights in the Bill of Rights had been applied to the states. The Supreme Court has held that the amendment's Due Process Clause incorporates all of the substantive protections of the First, Second, Fourth, Fifth (except for its Grand Jury Clause) and Sixth Amendments and the Cruel and Unusual Punishment Clause of the Eighth Amendment. While the Third Amendment has not been applied to the states by the Supreme Court, the Second Circuit ruled that it did apply to the states within that circuit's jurisdiction in Engblom v. Carey. The Seventh Amendment has been held not to be applicable to the states.
Equal Protection Clause
The Equal Protection Clause was added to deal with the lack of equal protection provided by law in states with Black Codes. Under Black Codes, blacks could not sue, give evidence, or be witnesses, and they received harsher degrees of punishment than whites. The clause requires the states to treat individuals in similar situations equally under the law.
Before oral argument in Santa Clara County v. Southern Pacific Railroad (1886), Chief Justice Morrison Waite announced from the bench, as recorded in the court reporter's headnote:
"The court does not wish to hear argument on the question whether the provision in the Fourteenth Amendment to the Constitution, which forbids a State to deny to any person within its jurisdiction the equal protection of the laws, applies to these corporations. We are all of the opinion that it does."
This dictum, which established that corporations enjoyed personhood under the Equal Protection Clause, was repeatedly reaffirmed by later courts. It remained the predominant view throughout the twentieth century, though it was challenged in dissents by justices such as Hugo Black and William O. Douglas.
In the decades following the adoption of the Fourteenth Amendment, the Supreme Court overturned laws barring blacks from juries (Strauder v. West Virginia, 1880) or discriminating against Chinese Americans in the regulation of laundry businesses (Yick Wo v. Hopkins, 1886), as violations of the Equal Protection Clause. However, in Plessy v. Ferguson (1896), the Supreme Court held that the states could impose segregation so long as they provided similar facilities—the formation of the "separate but equal" doctrine.
The Court went even further in restricting the Equal Protection Clause in Berea College v. Kentucky (1908), holding that the states could force private actors to discriminate by prohibiting colleges from having both black and white students. By the early 20th century, the Equal Protection Clause had been eclipsed to the point that Justice Oliver Wendell Holmes, Jr. dismissed it as "the usual last resort of constitutional arguments."
The Court held to the "separate but equal" doctrine for more than fifty years, despite numerous cases in which the Court itself had found that the segregated facilities provided by the states were almost never equal, until Brown v. Board of Education (1954) reached the Court. In Brown the Court ruled that even if segregated black and white schools were of equal quality in facilities and teachers, segregation by itself was harmful to black students and so was unconstitutional. Brown met with a campaign of resistance from white Southerners, and for decades the federal courts attempted to enforce Brown's mandate against repeated attempts at circumvention. This resulted in the controversial desegregation busing decrees handed down by federal courts in various parts of the nation (see Milliken v. Bradley, 1974).
In Hernandez v. Texas (1954), the Court held that the Fourteenth Amendment protects those beyond the racial classes of white or "Negro" and extends to other racial and ethnic groups, such as Mexican Americans in this case. In the half-century following Brown, the Court extended the reach of the Equal Protection Clause to other historically disadvantaged groups, such as women and illegitimate children, although it has applied a somewhat less stringent standard than it has applied to governmental discrimination on the basis of race (United States v. Virginia, 1996; Levy v. Louisiana, 1968).
Reed v. Reed (1971), which struck down an Idaho probate law favoring men, was the first decision in which the Court ruled that arbitrary gender discrimination violated the Equal Protection Clause. In Craig v. Boren (1976), the Court ruled that statutory or administrative sex classifications had to be subjected to an intermediate standard of judicial review. Reed and Craig later served as precedents to strike down a number of state laws discriminating by gender.
Since Wesberry v. Sanders (1964) and Reynolds v. Sims (1964), the Supreme Court has interpreted the Equal Protection Clause as requiring the states to apportion their congressional districts and state legislative seats according to "one man, one vote". The Court has also struck down redistricting plans in which race was a key consideration. In Shaw v. Reno (1993), the Court prohibited a North Carolina plan aimed at creating majority-black districts to balance historic underrepresentation in the state's congressional delegations.
The Equal Protection Clause served as the basis for the decision in Bush v. Gore (2000), in which the Court ruled that no constitutionally valid recount of Florida's votes in the 2000 presidential election could be held within the needed deadline; the decision effectively secured Bush's victory in the disputed election. In League of United Latin American Citizens v. Perry (2006), the Court ruled that House Majority Leader Tom DeLay's Texas redistricting plan intentionally diluted the votes of Latinos and thus violated the Equal Protection Clause.
Apportionment of representation in House of Representatives
Section 2 altered the way each state's representation in the House of Representatives is determined. It counts all residents for apportionment, overriding Article I, Section 2, Clause 3 of the Constitution, which counted only three-fifths of each state's slave population.
Section 2 also reduces a state's apportionment if it wrongfully denies any adult male's right to vote, while explicitly permitting felony disenfranchisement. However, this provision was never enforced, and southern states continued to use pretexts to prevent many blacks from voting until the passage of the Voting Rights Act in 1965. Because it protects the right to vote only of adult males, not adult females, this clause is the only provision of the US Constitution to discriminate explicitly on the basis of sex.
Some have argued that Section 2 was implicitly repealed by the Fifteenth Amendment, but the Supreme Court acknowledged the provisions of Section 2 in some later decisions. For example, in Richardson v. Ramirez (1974), the Court cited Section 2 as justification for the states disenfranchising felons.
Participants in rebellion
Section 3 prohibits the election or appointment to any federal or state office of any person who had held any of certain offices and then engaged in insurrection, rebellion or treason. However, a two-thirds vote by each House of the Congress can override this limitation. In 1898, the Congress enacted a general removal of Section 3's limitation. In 1975, the citizenship of Confederate general Robert E. Lee was restored by a joint congressional resolution, retroactive to June 13, 1865. In 1978, pursuant to Section 3, the Congress posthumously removed the service ban from Confederate president Jefferson Davis.
Section 3 was used to prevent Socialist Party of America member Victor L. Berger, convicted of violating the Espionage Act for his anti-militarist views, from taking his seat in the House of Representatives in 1919 and 1920.
Validity of public debt
Section 4 confirmed the legitimacy of all U.S. public debt appropriated by the Congress. It also confirmed that neither the United States nor any state would pay for the loss of slaves or debts that had been incurred by the Confederacy. For example, during the Civil War several British and French banks had lent large sums of money to the Confederacy to support its war against the Union. In Perry v. United States (1935), the Supreme Court ruled that, under Section 4, voiding a United States bond "went beyond the congressional power."
The debt-ceiling crisis in 2011 raised the question of what powers Section 4 gives to the President, an issue that remains unsettled. Some, such as legal scholar Garrett Epps, fiscal expert Bruce Bartlett and Treasury Secretary Timothy Geithner, have argued that a debt ceiling may be unconstitutional and therefore void as long as it interferes with the duty of the government to pay interest on outstanding bonds and to make payments owed to pensioners (that is, Social Security recipients). Legal analyst Jeffrey Rosen has argued that Section 4 gives the President unilateral authority to raise or ignore the national debt ceiling, and that if challenged the Supreme Court would likely rule in favor of expanded executive power or dismiss the case altogether for lack of standing. Erwin Chemerinsky, professor and dean at University of California, Irvine School of Law, has argued that not even in a "dire financial emergency" could the President raise the debt ceiling as "there is no reasonable way to interpret the Constitution that [allows him to do so]".
Power of enforcement
Section 5 enables Congress to pass laws enforcing the Amendment's provisions. In the Civil Rights Cases (1883), the Supreme Court interpreted Section 5 narrowly, stating that "the legislation which Congress is authorized to adopt in this behalf is not general legislation upon the rights of the citizen, but corrective legislation"; in other words, Congress could only pass laws intended to combat violations of the rights enumerated in other sections. In a 1966 decision, Katzenbach v. Morgan, the Court upheld a section of the Voting Rights Act of 1965, ruling that Section 5 enabled Congress to act both remedially and prophylactically to protect rights enumerated in the amendment. In City of Boerne v. Flores (1997), the Court rejected Congress' ability to define or interpret constitutional rights via Section 5.
Proposal and ratification
The 39th United States Congress proposed the Fourteenth Amendment on June 13, 1866. Ratification of the Fourteenth Amendment was bitterly contested: all the Southern state legislatures, with the exception of Tennessee, refused to ratify the amendment. This refusal led to the passage of the Reconstruction Acts, which set aside the existing state governments and imposed military government until new civil governments were established and the Fourteenth Amendment was ratified.
On March 2, 1867, the Congress passed a law that required any formerly Confederate state to ratify the Fourteenth Amendment before "said State shall be declared entitled to representation in Congress".
By July 9, 1868, three-fourths of the states (28 of 37) had ratified the amendment:
- Connecticut (June 25, 1866)
- New Hampshire (July 6, 1866)
- Tennessee (July 19, 1866)
- New Jersey (September 11, 1866)*
- Oregon (September 19, 1866)
- Vermont (October 30, 1866)
- Ohio (January 4, 1867)*
- New York (January 10, 1867)
- Kansas (January 11, 1867)
- Illinois (January 15, 1867)
- West Virginia (January 16, 1867)
- Michigan (January 16, 1867)
- Minnesota (January 16, 1867)
- Maine (January 19, 1867)
- Nevada (January 22, 1867)
- Indiana (January 23, 1867)
- Missouri (January 25, 1867)
- Rhode Island (February 7, 1867)
- Wisconsin (February 7, 1867)
- Pennsylvania (February 12, 1867)
- Massachusetts (March 20, 1867)
- Nebraska (June 15, 1867)
- Iowa (March 16, 1868)
- Arkansas (April 6, 1868, after having rejected it on December 17, 1866)
- Florida (June 9, 1868, after having rejected it on December 6, 1866)
- North Carolina (July 4, 1868, after having rejected it on December 14, 1866)
- Louisiana (July 9, 1868, after having rejected it on February 6, 1867)
- South Carolina (July 9, 1868, after having rejected it on December 20, 1866)
*Ohio passed a resolution that purported to withdraw its ratification on January 15, 1868. The New Jersey legislature also tried to rescind its ratification on February 20, 1868, citing procedural problems with the amendment's congressional passage, including that specific states were unlawfully denied representation in the House and the Senate at the time. The New Jersey governor had vetoed his state's withdrawal on March 5, and the legislature overrode the veto on March 24.
On July 20, 1868, Secretary of State William H. Seward certified that the amendment had become part of the Constitution if the rescissions were ineffective, and presuming also that the later ratifications by states whose governments had been reconstituted superseded the initial rejection of the prior state legislatures. The Congress responded on the following day, declaring that the amendment was part of the Constitution and ordering Seward to promulgate the amendment.
Meanwhile, two additional states had ratified the amendment:
- Alabama (July 13, 1868, the date the ratification was approved by the governor)
- Georgia (July 21, 1868, after having rejected it on November 9, 1866)
Thus, on July 28, Seward was able to certify unconditionally that the amendment was part of the Constitution without having to endorse the Congress's assertion that the withdrawals were ineffective.
After the Democrats won the legislative election in Oregon, they passed a rescission of the earlier ratification adopted by the Unionist Party legislature. The rescission, which came on October 15, 1868, was disregarded as too late. The amendment has since been ratified by all of the 37 states that were in the Union in 1868, with Ohio, New Jersey, and Oregon re-ratifying after their rescissions:
- Virginia (October 8, 1869, after having rejected it on January 9, 1867)
- Mississippi (January 17, 1870, after having rejected it on January 31, 1868)
- Texas (February 18, 1870, after having rejected it on October 27, 1866)
- Delaware (February 12, 1901, after having rejected it on February 7, 1867)
- Maryland (April 4, 1959, after having rejected it on March 23, 1867)
- California (March 18, 1959)
- Oregon (1973, after withdrawing it on October 15, 1868)
- Kentucky (May 6, 1976, after having rejected it on January 8, 1867)
- New Jersey (2003, after having rescinded on February 20, 1868)
- Ohio (2003, after having rescinded on January 15, 1868)
Selected Supreme Court cases
Privileges or immunities
Procedural due process/Incorporation
Substantive due process
Apportionment of Representatives
- 1974: Richardson v. Ramirez
Power of enforcement
- "Constitution of the United States: Amendments 11–27". National Archives and Records Administration. Archived from the original on June 11, 2013. Retrieved June 11, 2013.
- "Tsesis, Alexander, The Inalienable Core of Citizenship: From Dred Scott to the Rehnquist Court". Arizona State Law Journal, Vol. 39, 2008 (Ssrn.com). SSRN 1023809.
- McDonald v. Chicago, 130 S. Ct. 3020, 3060 (2010) ("This [clause] unambiguously overruled this Court's contrary holding in Dred Scott.")
- Goldstone 2011, pp. 23–24.
- Eric Foner, "The Second American Revolution", In These Times, September 1987; reprinted in Civil Rights Since 1787, ed. Jonathan Birnbaum & Clarence Taylor, NYU Press, 2000. ISBN 0814782493
- Duhaime, Lloyd. "Legal Definition of Black Code". duhaime.org. Retrieved March 25, 2009.
- Foner, Eric. Reconstruction. pp. 199–200. ISBN 0-8071-2234-3.
- "Finkelman, Paul, John Bingham and the Background to the Fourteenth Amendment". Akron Law Review, Vol. 36, No. 671, 2003 (Ssrn.com). April 2, 2009. SSRN 1120308.
- Harrell, David and Gaustad, Edwin. Unto A Good Land: A History Of The American People, Volume 1, p. 520 (Eerdmans Publishing, 2005): "The most important, and the one that has occasioned the most litigation over time as to its meaning and application, was Section One."
- Stephenson, D. The Waite Court: Justices, Rulings, and Legacy, p. 147 (ABC-CLIO, 2003).
- Messner, Emily. "Born in the U.S.A. (Part I)", The Debate, The Washington Post (March 30, 2006).
- Robert Pear (August 7, 1996). "Citizenship Proposal Faces Obstacle in the Constitution". The New York Times.
- LaFantasie, Glenn (March 20, 2011). "The erosion of the Civil War consensus". Salon.
- Congressional Globe, 1st Session, 39th Congress, pt. 4, p. 2893 Senator Reverdy Johnson said in the debate: "Now, all this amendment provides is, that all persons born in the United States and not subject to some foreign Power--for that, no doubt, is the meaning of the committee who have brought the matter before us--shall be considered as citizens of the United States...If there are to be citizens of the United States entitled everywhere to the character of citizens of the United States, there should be some certain definition of what citizenship is, what has created the character of citizen as between himself and the United States, and the amendment says citizenship may depend upon birth, and I know of no better way to give rise to citizenship than the fact of birth within the territory of the United States, born of parents who at the time were subject to the authority of the United States."
- Congressional Globe, 1st Session, 39th Congress, pt. 4, p. 2897.
- Congressional Globe, 1st Session, 39th Congress, pt. 1, p. 572.
- Congressional Globe, 1st Session, 39th Congress, pt. 1, p. 498. The debate on the Civil Rights Act contained the following exchange:
Mr. Cowan: "I will ask whether it will not have the effect of naturalizing the children of Chinese and Gypsies born in this country?"
Mr. Trumbull: "Undoubtedly."
Mr. Trumbull: "I understand that under the naturalization laws the children who are born here of parents who have not been naturalized are citizens. This is the law, as I understand it, at the present time. Is not the child born in this country of German parents a citizen? I am afraid we have got very few citizens in some of the counties of good old Pennsylvania if the children born of German parents are not citizens."
Mr. Cowan: "The honorable Senator assumes that which is not the fact. The children of German parents are citizens; but Germans are not Chinese; Germans are not Australians, nor Hottentots, nor anything of the kind. That is the fallacy of his argument."
Mr. Trumbull: "If the Senator from Pennsylvania will show me in the law any distinction made between the children of German parents and the children of Asiatic parents, I may be able to appreciate the point which he makes; but the law makes no such distinction; and the child of an Asiatic is just as much of a citizen as the child of a European."
- Congressional Globe, 1st Session, 39th Congress, pt. 4, pp. 2891-2 During the debate on the Amendment, Senator John Conness of California declared, "The proposition before us, I will say, Mr. President, relates simply in that respect to the children begotten of Chinese parents in California, and it is proposed to declare that they shall be citizens. We have declared that by law [the Civil Rights Act]; now it is proposed to incorporate that same provision in the fundamental instrument of the nation. I am in favor of doing so. I voted for the proposition to declare that the children of all parentage, whatever, born in California, should be regarded and treated as citizens of the United States, entitled to equal Civil Rights with other citizens.".
- See veto message by President Andrew Johnson.
- Congressional Globe, 1st Session, 39th Congress, pt. 4, pp. 2890, 2892–94, 2896.
- Congressional Globe, 1st Session, 39th Congress, pt. 4, p. 2893. Trumbull, during the debate, said, "What do we [the committee reporting the clause] mean by 'subject to the jurisdiction of the United States'? Not owing allegiance to anybody else. That is what it means." He then proceeded to expound upon what he meant by "complete jurisdiction": "Can you sue a Navajoe Indian in court?...We make treaties with them, and therefore they are not subject to our jurisdiction.... If we want to control the Navajoes, or any other Indians of which the Senator from Wisconsin has spoken, how do we do it? Do we pass a law to control them? Are they subject to our jurisdiction in that sense?.... Would he [Sen. Doolittle] think of punishing them for instituting among themselves their own tribal regulations? Does the Government of the United States pretend to take jurisdiction of murders and robberies and other crimes committed by one Indian upon another?... It is only those persons who come completely within our jurisdiction, who are subject to our laws, that we think of making citizens."
- Congressional Globe, 1st Session, 39th Congress, pt. 4, p. 2895. Howard additionally stated the word jurisdiction meant "the same jurisdiction in extent and quality as applies to every citizen of the United States now" and that the U.S. possessed a "full and complete jurisdiction" over the person described in the amendment.
- Elk v. Wilkins, 112 U.S. 94 (1884)
- Urofsky, Melvin I.; Finkelman, Paul (2002). A March of Liberty: A Constitutional History of the United States 1 (2nd ed.). New York, NY: Oxford University Press. ISBN 0-19-512635-1.
- United States v. Wong Kim Ark 169 U.S. 649 (1898)
- Rodriguez, C.M. (2009). "The Second Founding: The Citizenship Clause, Original Meaning, and the Egalitarian Unity of the Fourteenth Amendment" [PDF]. U. Pa. J. Const. L. 11: 1363–1475. Retrieved January 20, 2011.
- "14th Amendment: why birthright citizenship change 'can't be done'". Christian Science Monitor. August 10, 2010. Archived from the original on June 12, 2013. Retrieved June 12, 2013.
- U.S. Department of State (February 1, 2008). "Advice about Possible Loss of U.S. Citizenship and Dual Nationality". Retrieved April 17, 2009.
- For example, see Perez v. Brownell, 356 U.S. 44 (1958), overruled by Afroyim v. Rusk, 387 U.S. 253 (1967)
- Afroyim v. Rusk, 387 U.S. 253 (1967)
- Vance v. Terrazas, 444 U.S. 252 (1980)
- Slaughter-House Cases, 83 U.S. 36 (1873)
- Civil Rights Cases, 109 U.S. 3 (1883)
- e.g., United States v. Morrison, 529 U.S. 598 (2000)
- Saenz v. Roe, 526 U.S. 489 (1999)
- Gupta, Gayatri (2009). "Due process". In Folsom, W. Davis; Boulware, Rick. Encyclopedia of American Business. Infobase. p. 134.
- Allgeyer v. Louisiana, 165 U.S. 578 (1897)
- "Due Process of Law – Substantive Due Process". West's Encyclopedia of American Law. Thomson Gale. 1998.
- Lochner v. New York, 198 U.S. 45 (1905)
- Adkins v. Children's Hospital, 261 U.S. 525 (1923)
- Meyer v. Nebraska, 262 U.S. 390 (1923)
- "CRS Annotated Constitution". Cornell University Law School Legal Information Institute. Archived from the original on June 12, 2013. Retrieved June 12, 2013.
- Mugler v. Kansas, 123 U.S. 623 (1887)
- Holden v. Hardy, 169 U.S. 366 (1898)
- Muller v. Oregon, 208 U.S. 412 (1908)
- Wilson v. New, 243 U.S. 332 (1917)
- United States v. Doremus, 249 U.S. 86 (1919)
- West Coast Hotel v. Parrish, 300 U.S. 379 (1937)
- White, Bradford (2008). Procedural Due Process in Plain English. National Trust for Historic Preservation. ISBN 0-89133-573-0.
- See also Mathews v. Eldridge (1976).
- Griswold v. Connecticut, 381 U.S. 479 (1965)
- "Griswold v. Connecticut". Encyclopedia of the American Constitution. – via HighBeam Research (subscription required). January 1, 2000. Retrieved June 16, 2013.
- Roe v. Wade, 410 U.S. 113 (1973)
- "Roe v. Wade 410 U.S. 113 (1973) Doe v. Bolton 410 U.S. 179 (1973)". Encyclopedia of the American Constitution. – via HighBeam Research (subscription required). January 1, 2000. Retrieved June 16, 2013.
- Planned Parenthood v. Casey, 505 U.S. 833 (1992)
- Casey, 505 U.S. at 845-846.
- Lawrence v. Texas, 539 U.S. 558 (2003)
- Marc Spindelman (June 1, 2004). "Surviving Lawrence v. Texas". Michigan Law Review. – via HighBeam Research (subscription required). Retrieved June 16, 2013.
- Caperton v. A.T. Massey Coal Co., 556 U.S. ___ (2009)
- Jess Bravin and Kris Maher (June 8, 2009). "Justices Set New Standard for Recusals". The Wall Street Journal. Retrieved June 9, 2009.
- Barron v. Baltimore, 32 U.S. 243 (1833)
- Leonard W. Levy. "Barron v. City of Baltimore 7 Peters 243 (1833)". Encyclopedia of the American Constitution. – via HighBeam Research (subscription required). Retrieved June 13, 2013.
- Foster, James C. (2006). "Bingham, John Armor". In Finkleman, Paul. Encyclopedia of American Civil Liberties. CRC Press. p. 145.
- Amar, Akhil Reed (1992). "The Bill of Rights and the Fourteenth Amendment". Yale Law Journal (The Yale Law Journal, Vol. 101, No. 6) 101 (6): 1193–1284. doi:10.2307/796923. JSTOR 796923.
- "Duncan v. Louisiana (Mr. Justice Black, joined by Mr. Justice Douglas, concurring)". Cornell Law School – Legal Information Institute. May 20, 1968. Retrieved April 26, 2009.
- Levy, Leonard (1970). Fourteenth Amendment and the Bill of Rights: The Incorporation Theory (American Constitutional and Legal History Series). Da Capo Press. ISBN 0-306-70029-8.
- Engblom v. Carey, 677 F.2d 957 (2d Cir. 1982)
- "Minneapolis & St. Louis R. Co. v. Bombolis (1916)". Supreme.justia.com. May 22, 1916. Retrieved August 1, 2010.
- Goldstone 2011, pp. 20, 23–24.
- Failinger, Marie (2009). "Equal protection of the laws". In Schultz, David Andrew. The Encyclopedia of American Law. Infobase. pp. 152–53.
- Johnson, John W. (January 1, 2001). Historic U.S. Court Cases: An Encyclopedia. Routledge. pp. 446–47. ISBN 978-0-415-93755-9. Retrieved June 13, 2013.
- Vile, John R., ed. (2003). "Corporations". Encyclopedia of Constitutional Amendments, Proposed Amendments, and Amending Issues: 1789 - 2002. ABC-CLIO. p. 116.
- Strauder v. West Virginia, 100 U.S. 303 (1880)
- Yick Wo v. Hopkins, 118 U.S. 356 (1886)
- Plessy v. Ferguson, 163 U.S. 537 (1896)
- Abrams, Eve (February 12, 2009). "Plessy/Ferguson plaque dedicated". WWNO (University New Orleans Public Radio). Retrieved April 17, 2009.
- Berea College v. Kentucky, 211 U.S. 45 (1908)
- Oliver Wendell Holmes, Jr. "274 U.S. 200: Buck v. Bell". Cornell University Law School Legal Information Institute. Archived from the original on June 12, 2013. Retrieved June 12, 2013.
- Brown v. Board of Education, 347 U.S. 483 (1954)
- Patterson, James (2002). Brown v. Board of Education: A Civil Rights Milestone and Its Troubled Legacy (Pivotal Moments in American History). Oxford University Press. ISBN 0-19-515632-3.
- "Forced Busing and White Flight". Time. September 25, 1978. Retrieved June 17, 2009.
- Hernandez v. Texas, 347 U.S. 475 (1954)
- United States v. Virginia, 518 U.S. 515 (1996)
- Levy v. Louisiana, 391 U.S. 68 (1968)
- Gerstmann, Evan (1999). The Constitutional Underclass: Gays, Lesbians, and the Failure of Class-Based Equal Protection. University Of Chicago Press. ISBN 0-226-28860-9.
- Reed v. Reed, 404 U.S. 71 (1971)
- "Reed v. Reed 1971". Supreme Court Drama: Cases that Changed America. – via HighBeam Research (subscription required). January 1, 2001. Retrieved June 12, 2013.
- Craig v. Boren, 429 U.S. 190 (1976)
- Kenneth L. Karst (January 1, 2000). "Craig v. Boren 429 U.S. 190 (1976)". Encyclopedia of the American Constitution. – via HighBeam Research (subscription required). Retrieved June 16, 2013.
- Wesberry v. Sanders, 376 U.S. 1 (1964).
- Reynolds v. Sims, 377 U.S. 533 (1964).
- Epstein, Lee; Walker, Thomas G. (2007). Constitutional Law for a Changing America: Rights, Liberties, and Justice (6th ed.). Washington, D.C.: CQ Press. p. 775. ISBN 0-87187-613-2. "Wesberry and Reynolds made it clear that the Constitution demanded population-based representational units for the U.S. House of Representatives and both houses of state legislatures...."
- Shaw v. Reno, 509 U.S. 630 (1993)
- Aleinikoff, T. Alexander; Samuel Issacharoff (1993). "Race and Redistricting: Drawing Constitutional Lines after Shaw v. Reno". Michigan Law Review (Michigan Law Review, Vol. 92, No. 3) 92 (3): 588–651. doi:10.2307/1289796. JSTOR 1289796.
- Bush v. Gore, 531 U.S. 98 (2000)
- "Bush v. Gore". Encyclopaedia Britannica. Retrieved June 12, 2013.
- League of United Latin American Citizens v. Perry, 548 U.S. 399 (2006)
- Gilda R. Daniels (March 22, 2012). "Fred Gray: life, legacy, lessons". Faulkner Law Review. – via HighBeam Research (subscription required). Retrieved June 12, 2013.
- Walter Friedman (January 1, 2006). "Fourteenth Amendment". Encyclopedia of African-American Culture and History. – via HighBeam Research (subscription required). Retrieved June 12, 2013.
- Chin, Gabriel J. (2004). "Reconstruction, Felon Disenfranchisement, and the Right to Vote: Did the Fifteenth Amendment Repeal Section 2 of the Fourteenth?". Georgetown Law Journal 92: 259.
- Richardson v. Ramirez, 418 U.S. 24 (1974)
- "Sections 3 and 4: Disqualification and Public Debt". Caselaw.lp.findlaw.com. June 5, 1933. Retrieved August 1, 2010.
- "Pieces of History: General Robert E. Lee's Parole and Citizenship". Prologue Magazine (The National Archives) 37 (1). 2005.
- Goodman, Bonnie K. (2006). "History Buzz: October 16, 2006: This Week in History". History News Network. Retrieved June 18, 2009.
- "Chapter 157: The Oath As Related To Qualifications", Cannon's Precedents of the U.S. House of Representatives 6, January 1, 1936
- For more on Section 4 go to Findlaw.com
- "294 U.S. 330 at 354". Findlaw.com. Retrieved August 1, 2010.
- Liptak, Adam (July 24, 2011). "The 14th Amendment, the Debt Ceiling and a Way Out". The New York Times. Retrieved July 30, 2011. "In recent weeks, law professors have been trying to puzzle out the meaning and relevance of the provision. Some have joined Mr. Clinton in saying it allows Mr. Obama to ignore the debt ceiling. Others say it applies only to Congress and only to outright default on existing debts. Still others say the president may do what he wants in an emergency, with or without the authority of the 14th Amendment."
- "Our National Debt 'Shall Not Be Questioned,' the Constitution Says". The Atlantic. May 4, 2011.
- Sahadi, Jeanne. "Is the debt ceiling unconstitutional?". CNN Money. Retrieved January 2, 2013.
- Rosen, Jeffrey. "How Would the Supreme Court Rule on Obama Raising the Debt Ceiling Himself?". The New Republic. Retrieved July 29, 2011.
- Chemerinsky, Erwin (July 29, 2011). "The Constitution, Obama and raising the debt ceiling". Los Angeles Times. Retrieved July 30, 2011.
- "FindLaw: U.S. Constitution: Fourteenth Amendment, p. 40". Caselaw.lp.findlaw.com. Retrieved August 1, 2010.
- Katzenbach v. Morgan, 384 U.S. 641 (1966)
- Theodore Eisenberg (January 1, 2000). "Katzenbach v. Morgan 384 U.S. 641 (1966)". Encyclopedia of the American Constitution. – via HighBeam Research (subscription required). Retrieved June 12, 2013.
- City of Boerne v. Flores, 521 U.S. 507 (1997)
- Steven A. Engel (October 1, 1999). "The McCulloch theory of the Fourteenth Amendment: City of Boerne v. Flores and the original understanding of section 5". Yale Law Journal. – via HighBeam Research (subscription required). Retrieved June 12, 2013.
- "The Civil War And Reconstruction". Retrieved October 21, 2010.
- "Library of Congress, Thirty-Ninth Congress Session II". Retrieved May 11, 2013.
- Mount, Steve (January 2007). "Ratification of Constitutional Amendments". Retrieved February 24, 2007.
- Documentary History of the Constitution of the United States, Vol. 5. Department of State. pp. 533–543. ISBN 0-8377-2045-1.
- A Century of Lawmaking for a New Nation: U.S. Congressional Documents and Debates, 1774-1875. Library of Congress. p. 707.
- Chin, Gabriel J.; Abraham, Anjali (2008). "Beyond the Supermajority: Post-Adoption Ratification of the Equality Amendments". Arizona Law Review 50: 25.
- P.L. 2003, Joint Resolution No. 2; 4/23/03
- Goldstone, Lawrence (2011). Inherently Unequal: The Betrayal of Equal Rights by the Supreme Court, 1865-1903. Walker & Company. ISBN 978-0-8027-1792-4.
- Halbrook, Stephen P. (1998). Freedmen, the 14th Amendment, and the Right to Bear Arms, 1866-1876. Greenwood Publishing Group. ISBN 9780275963316. Retrieved March 29, 2013. at Questia
- Nelson, William E. The Fourteenth Amendment: from political principle to judicial doctrine (Harvard University Press, 1988) online edition
- Bogen, David S. (April 30, 2003). Privileges and Immunities: A Reference Guide to the United States Constitution. Greenwood Publishing Group. ISBN 9780313313479. Retrieved March 19, 2013.
- "Amendments to the Constitution of the United States" (PDF). GPO Access. Retrieved September 11, 2005. (PDF, providing text of amendment and dates of ratification)
- CRS Annotated Constitution: Fourteenth Amendment
- Fourteenth Amendment and related resources at the Library of Congress
- National Archives: Fourteenth Amendment | http://en.wikipedia.org/wiki/Fourteenth_Amendment_to_the_United_States_Constitution | 13 |
53 | The Convention on Biological Diversity is probably the most all-encompassing international agreement ever adopted. It seeks to conserve the diversity of life on Earth at all levels - genetic, population, species, habitat, and ecosystem - and to ensure that this diversity continues to maintain the life support systems of the biosphere overall. It recognizes that setting social and economic goals for the use of biological resources and the benefits derived from genetic resources is central to the process of sustainable development, and that this in turn will support conservation.
Achieving the goals of the Convention will require progress on many fronts. Existing knowledge must be used more effectively; a deeper understanding of human ecology and environmental effects must be gained and communicated to those who can stimulate and shape policy change; environmentally more benign practices and technologies must be applied; and unprecedented technical and financial cooperation at international level is needed.
International environmental agreements
Throughout history human societies have established rules and customs to keep the use of natural resources within limits in order to avoid long-term damage to the resource. Aspects of biodiversity management have been on the international agenda for many years, although early international environmental treaties were primarily concerned with controlling the excess exploitation of particular species.
The origins of modern attempts to manage global biological diversity can be traced to the United Nations Conference on the Human Environment held in Stockholm in 1972, which explicitly identified biodiversity conservation as a priority. The Action Plan in Programme Development and Priorities adopted in 1973 at the first session of the Governing Council of UNEP identified the “conservation of nature, wildlife and genetic resources” as a priority area. The international importance of conservation was confirmed by the adoption, in the same decade, of the Convention on Wetlands (1971), the World Heritage Convention (1972), the Convention on International Trade in Endangered Species (1973), and the Convention on Migratory Species (1979) as well as various regional conventions.
Making the connections
By the 1980s, however, it was becoming apparent that traditional conservation alone would not arrest the decline of biological diversity, and new approaches would be needed to address collective failure to manage the human environment and to achieve equitable human development. Important declarations throughout the 1980s, such as the World Conservation Strategy (1980) and the resolution of the General Assembly of the United Nations on the World Charter for Nature (1982), stressed the new challenges facing the global community. In 1983 the General Assembly of the United Nations approved the establishment of a special independent commission to report on environment and development issues, including proposed strategies for sustainable development. The 1987 report of this World Commission on Environment and Development, entitled Our Common Future (also known as the `Brundtland Report'), argued that “the challenge of finding sustainable development paths ought to provide the impetus - indeed the imperative - for a renewed search for multilateral solutions and a restructured system of cooperation. These challenges cut across the divides of national sovereignty, of limited strategies for economic gain, and of separated disciplines of science”.
A growing consensus was emerging among scientists, policy-makers and the public, that the biosphere had to be seen as a single system, and that its conservation required multilateral action, since global environmental problems cannot by definition be addressed in isolation by individual States, or even by regional groupings.
By the end of the 1980s, international negotiations were underway that would lead to the United Nations Conference on Environment and Development (the `Earth Summit', or UNCED), held in Rio de Janeiro in June 1992. At this pivotal meeting, Agenda 21 (the `Programme of Action for Sustainable Development'), the Rio Declaration on Environment and Development, and the Statement of Forest Principles, were adopted, and both the United Nations Framework Convention on Climate Change and the Convention on Biological Diversity were opened for signature.
Financial resources for global environmental protection
During the same period there was an increasing interest in international mechanisms for environmental funding. With the debt crisis, commercial flows for development had become scarce, and the role of multilateral assistance had assumed greater importance in discussions on financial flows and debt rescheduling. Simultaneously, concern with new funding for environmental issues was growing - the Brundtland Report argued for a significant increase in financial support from international sources; the 1987 Montreal Protocol on Substances that Deplete the Ozone Layer established a financial mechanism to provide financial and technical assistance to eligible Parties for the phasing out of chlorofluorocarbons (CFCs); and the concept of debt-for-nature swaps, that would promote `win-win' situations allowing developing countries to ease their debt burdens and finance environmental protection, was being examined.
A number of proposals for funds and mechanisms were made. Donor country readiness to increase the supply of funds was low and their willingness to support new international agencies even lower, but nevertheless recognition of the principle that additional environment-related funding would have to be provided to developing countries was emerging. During 1989 and 1990 discussions took place within the framework of the World Bank's Development Committee on a new funding mechanism for the environment. At the end of 1990 agreement was reached on the establishment of the Global Environment Facility under a tripartite agreement between the World Bank, UNDP and UNEP. The GEF would be a pilot initiative for a three-year period (1991-1994) to promote international cooperation and to foster action to protect the global environment. The grants and concessional funds disbursed would complement traditional development assistance by covering the additional costs (also known as `agreed incremental costs') incurred when a national, regional or global development project also targets global environmental objectives.
The GEF was given four focal areas, one of which was to be biological diversity.
One of the first initiatives taken under the pilot phase was to support preparation of Biodiversity Country Studies in twenty-four developing countries and countries in transition. The primary objective of the Biodiversity Country Studies was to gather and analyse the data required to drive forward the process of developing national strategies, plans, or programmes for the conservation and sustainable use of biological diversity and to integrate these activities with other relevant sectoral or cross-sectoral plans, programs, or policies. This anticipated the provisions of key articles of the Convention on Biological Diversity, in particular the requirements in Article 6 for each country to have a national biodiversity strategy and to integrate the conservation and sustainable use of biodiversity into all sectors of national planning and in Article 7 to identify components of biological diversity important for its conservation and sustainable use.
The negotiation of the Convention on Biological Diversity
The World Conservation Union (IUCN) had been exploring the possibilities for a treaty on the conservation of natural resources, and between 1984 and 1989 had prepared successive drafts of articles for inclusion in a treaty. The IUCN draft articles concentrated on the global action needed to conserve biodiversity at the genetic, species and ecosystem levels, and focused on in-situ conservation within and outside protected areas. It also included the provision of a funding mechanism to share the conservation burden between the North and the South.
In 1987 the Governing Council of UNEP established an Ad Hoc Working Group of Experts on Biological Diversity to investigate “the desirability and possible form of an umbrella convention to rationalise current activities in this field, and to address other areas which might fall under such a convention”.
The Group of Experts concluded that while existing global and regional conventions addressed different aspects of biological diversity, the specific focus and mandates of these conventions did not constitute a regime that could ensure global conservation of biological diversity. On the other hand, it also concluded that the development of an umbrella agreement to absorb or consolidate existing conventions was legally and technically impossible. By 1990 the Group had reached a consensus on the need for a new global treaty on biological diversity, in the form of a framework treaty building on existing conventions.
The scope of such a convention was broadened to include all aspects of biological diversity, including in-situ conservation of wild and domesticated species, sustainable use of biological resources, access to genetic resources and to relevant technology, including biotechnology, access to benefits derived from such technology, safety of activities related to living modified organisms, and provision of new and additional financial support.
In February 1991 the Group of Experts became the Intergovernmental Negotiating Committee for a Convention on Biological Diversity (INC). The INC held seven negotiating sessions, aiming to have the Convention adopted in time for it to be signed by States at the Earth Summit in June 1992.
The relationship between the objectives of the Convention and issues relating to trade, to agriculture and to the emerging biotechnology sector were key issues in the minds of the negotiators. Part of the novelty of the Convention on Biological Diversity lies in the recognition that, to meet its objectives, the Convention would need to make sure that these objectives were acknowledged and taken account of by other key legal regimes. These included the trade regime that would enter into force in 1994 under the World Trade Organization; the FAO Global System on Plant Genetic Resources, in particular the International Undertaking on Plant Genetic Resources adopted in 1983; and the United Nations Convention on the Law of the Sea which was concluded in 1982 and would enter into force in 1994.
Those involved in negotiating the Convention on Biological Diversity, as well as those involved in the parallel negotiations on the United Nations Framework Convention on Climate Change, were consciously developing a new generation of environmental conventions. These conventions recognized that the problems they sought to remedy arose from the collective impacts of the activities of many major economic sectors and from trends in global production and consumption. They also recognized that, to be effective, they would need to make sure that the biodiversity and climate change objectives were taken into account in national policies and planning in all sectors, national legislation and relevant international legal regimes, the operations of relevant economic sectors, and by citizens of all countries through enhanced understanding and behavioural changes.
The text of the Convention was adopted in Nairobi on 22 May 1992, and between 5 and 14 June 1992 the Convention was signed in Rio de Janeiro by the unprecedented number of 156 States and one regional economic integration organization (the European Community). The early entry into force of the Convention only 18 months later, on 29 December 1993, was equally unprecedented, and by August 2001 the Convention had 181 Contracting Parties (Annex 2 and Map 18).
THE OBJECTIVES AND APPROACH OF THE CONVENTION
Objectives of the Convention
- Conservation of biological diversity
- Sustainable use of components of biological diversity
- Fair and equitable sharing of the benefits arising out of the use of genetic resources
The objectives of the Convention on Biological Diversity are “the conservation of biological diversity, the sustainable use of its components, and the fair and equitable sharing of the benefits arising out of the utilisation of genetic resources” (Article 1). These are translated into binding commitments in its normative provisions, contained in Articles 6 to 20.
A central purpose of the Convention on Biological Diversity, as with Agenda 21 and the Convention on Climate Change, is to promote sustainable development, and the underlying principles of the Convention are consistent with those of the other `Rio Agreements'. The Convention stresses that the conservation of biological diversity is a common concern of humankind, but recognizes that nations have sovereign rights over their own biological resources, and will need to address the overriding priorities of economic and social development and the eradication of poverty.
The Convention recognizes that the causes of the loss of biodiversity are diffuse in nature, and mostly arise as a secondary consequence of activities in economic sectors such as agriculture, forestry, fisheries, water supply, transportation, urban development, or energy, particularly activities that focus on deriving short-term benefits rather than long-term sustainability. Dealing with economic and institutional factors is therefore key to achieving the objectives of the Convention. Management objectives for biodiversity must incorporate the needs and concerns of the many stakeholders involved, from local communities upward.
A major innovation of the Convention is its recognition that all types of knowledge systems are relevant to its objectives. For the first time in an international legal instrument, the Convention recognises the importance of traditional knowledge - the wealth of knowledge, innovations and practices of indigenous and local communities that are relevant for the conservation and sustainable use of biological diversity. It calls for the wider application of such knowledge, with the approval and involvement of the holders, and establishes a framework to ensure that the holders share in any benefits that arise from the use of such traditional knowledge.
The Convention therefore places less emphasis on a traditional regulatory approach. Its provisions are expressed as overall goals and policies, with specific action for implementation to be developed in accordance with the circumstances and capabilities of each Party, rather than as hard and precise obligations. The Convention does not set any concrete targets; there are no lists and no annexes relating to sites or protected species. The responsibility for determining how most of its provisions are to be implemented at the national level therefore falls to the individual Parties themselves.
INSTITUTIONAL STRUCTURE OF THE CONVENTION
The Convention establishes the standard institutional elements of a modern environmental treaty: a governing body, the Conference of the Parties; a Secretariat; a scientific advisory body; a clearing-house mechanism and a financial mechanism. Collectively, these translate the general commitments of the Convention into binding norms or guidelines, and assist Parties with implementation. The rôles of these institutions are summarised here and discussed in more detail in chapter 3.
Because the Convention is more than a framework treaty, many of its provisions require further collective elaboration in order to provide a clear set of norms to guide States and stakeholders in their management of biodiversity. Development of this normative basis centres around decisions of the Conference of the Parties (COP), as the governing body of the Convention process. The principal function of the COP is to regularly review implementation of the Convention and to steer its development, including establishing such subsidiary bodies as may be required. The COP meets on a regular basis and held five meetings in the period 1994 to 2000. At its fifth meeting (2000) the COP decided that it would henceforth meet every two years.
The Subsidiary Body on Scientific, Technical and Technological Advice (SBSTTA) is the principal subsidiary body of the COP. Its mandate is to provide assessments of the status of biological diversity, assessments of the types of measures taken in accordance with the provisions of the Convention, and advice on any questions that the COP may put to it. SBSTTA met five times in the period 1995 to 2000 and, in the future, will meet twice in each two-year period between meetings of the COP.
The principal functions of the Secretariat are to prepare for and service meetings of the COP and other subsidiary bodies of the Convention, and to coordinate with other relevant international bodies. The Secretariat is provided by UNEP and is located in Montreal, Canada.
The Convention provides for the establishment of a clearing-house mechanism to promote and facilitate technical and scientific cooperation (Article 18). A pilot phase of the clearing-house mechanism took place from 1996 to 1998 and, following evaluation of this, the COP has approved a clearing-house mechanism strategic plan and a programme of work until 2004.
The Convention establishes a financial mechanism for the provision of resources to developing countries for the purposes of the Convention. The financial mechanism is operated by the Global Environment Facility (GEF) and functions under the authority and guidance of, and is accountable to, the COP. GEF activities are implemented by the United Nations Development Programme (UNDP), UNEP and the World Bank. Under the provisions of the Convention, developed country Parties undertake to provide `new and additional financial resources to enable developing country Parties to meet the agreed full incremental cost of implementing the obligations of the Convention' (Article 20) and, in addition to the provision of resources through the GEF, these Parties may also provide financial resources through bilateral and multilateral channels.
The COP is able, if it deems it necessary, to establish inter-sessional bodies and meetings to carry out work and provide advice between ordinary meetings of the COP. Those open-ended meetings that have been constituted so far include:
- Open-ended Ad Hoc Working Group on Biosafety (met six times from 1996-1999 - see below)
- Workshop on Traditional Knowledge and Biological Diversity (met in 1997)
- Intersessional Meeting on the Operations of the Convention (ISOC) (met in 1999)
- Ad Hoc Working Group on Article 8(j) and Related Provisions (met in 2000, will meet again in 2002)
- Ad Hoc Open-ended Working Group on Access and Benefit Sharing (will meet in 2001)
- Meeting on the Strategic Plan, National Reports and Implementation of the Convention (MSP) (will meet in 2001)
Figure 2.1 Institutions of the Convention
Cartagena Protocol on Biosafety
The Convention requires the Parties to “consider the need for and modalities of a protocol setting out appropriate procedures, including, in particular, advance informed agreement, in the field of the safe transfer, handling and use of any living modified organism resulting from biotechnology that may have adverse effect on the conservation and sustainable use of biological diversity” (Article 19(3)).
At its second meeting, the COP established a negotiating process and an Ad Hoc Working Group on Biosafety that met six times between 1996 and 1999 to develop a draft protocol. The draft submitted by the Working Group was considered by an Extraordinary Meeting of the COP held in Cartagena, Colombia in February 1999 and in Montreal, Canada in January 2000, and on 29 January 2000 the text of the Cartagena Protocol on Biosafety to the Convention on Biological Diversity was adopted. The Protocol was opened for signature during the fifth meeting of the COP in May 2000 where it was signed by 68 States. The number of signatures had risen to 103 by 1 August 2001, and five States had ratified the Protocol. It will enter into force after the fiftieth ratification.
The COP will serve as the meeting of the Parties to the Protocol. The meetings will however be distinct, and only Parties to the Convention who are also Parties to the Protocol may take decisions under the Protocol (States that are not a Party to the Convention cannot become Party to the Protocol). Pending the entry into force of the Protocol, an Intergovernmental Committee for the Cartagena Protocol (ICCP) has been established to undertake the preparations necessary for the first meeting of the Parties. The first meeting of the Intergovernmental Committee was held in Montpellier, France in December 2000 and the second in Nairobi, Kenya in September-October 2001.
THE DECISION-MAKING PROCESS
The activities of the COP have been organized through programmes of work that identify the priorities for future periods. The first medium-term programme of work (1995 to 1997) saw a focus on developing the procedures and modus operandi of the institutions, determining priorities, supporting national biodiversity strategies, and developing guidance to the financial mechanism. At its fourth meeting, the COP adopted a programme of work for its fifth, sixth and seventh meetings (1999-2004), and, at its fifth meeting, approved a longer-term programme of work for SBSTTA, and began the development of a strategic plan for the Convention.
The following are the key steps in the decision-making process.
The programme of work establishes a timetable indicating when the COP will consider in detail biological themes or ecosystems, or specific provisions of the Convention contained in the operative Articles. In addition to such ecosystem based programmes, the COP has addressed a number of key substantive issues in a broadly comprehensive manner. Such issues are collectively known as `cross-cutting issues', and these have an important rôle to play in bringing cohesion to the work of the Convention by linking the thematic programmes.
Submissions and Compilation of Information
The procedures by which the COP comes to adopt its decisions are broadly similar in each case. Firstly, current activities are reviewed to identify synergies and gaps within the existing institutional framework, or an overview of the state of knowledge on the issue under examination is developed. At the same time, Parties, international organizations, specialist scientific and non-governmental organizations are invited to provide information, such as reports or case studies. This review mechanism is coordinated by the Secretariat, supported in some cases by informal inter-agency task forces or liaison groups of experts.
Preparation of synthesis
Current ecosystem themes
- Marine and coastal biological diversity
- Forest biological diversity
- Biological diversity of inland water ecosystems
- Agricultural biological diversity
- Biological diversity of dry and sub-humid lands
- Mountain ecosystems (to be considered at COP-7 in 2004)
Current cross-cutting issues
- Identification, monitoring and assessment of biological diversity, and development of indicators
- Access to genetic resources
- Knowledge, innovations and practices of indigenous and local communities
- Sharing of the benefits arising from the utilisation of genetic resources
- Intellectual property rights
- The need to address a general lack of taxonomic capacity worldwide
- Alien species that threaten ecosystems, habitats or species
- Sustainable use, including tourism
- Protected areas (to be considered at COP-7 in 2004)
- Transfer of technology and technology cooperation (to be considered at COP-7 in 2004).
The Secretariat then prepares a preliminary synthesis of these submissions for consideration by SBSTTA. Where appropriate the Secretariat may use a liaison group to assist with this. In other cases SBSTTA may have established an ad hoc
technical expert group, with members drawn from rosters of experts nominated by Parties, to assist with the preparation of the synthesis. Where appropriate, the Secretariat may also identify relevant networks of experts and institutions, and coordinate their input to the preparation of the synthesis.
Scientific, Technical or Technological Advice
On the basis of the work of the Secretariat, of any ad hoc
technical expert group, and the findings of specialist meetings such as the Global Biodiversity Forum, SBSTTA will assess the status and trends of the biodiversity of the ecosystem in question or the relationship of the cross-cutting issue to the implementation of the Convention and develop its recommendation to the COP accordingly.
Supplementary Preparations for the COP
The advice of SBSTTA may be complemented by the work of the Secretariat in the inter-sessional period between the meeting of the SBSTTA and that of the COP. Such work may comprise issues not within the mandate of the SBSTTA, such as financial and legal matters, development of guidance to the financial mechanism, or relations with other institutions and processes that could contribute to implementation of the future decision of the COP.
The COP considers the recommendations of the SBSTTA and any other advice put before it. It will then advise Parties on the steps they should take to address the issue, in light of their obligations under the Convention. It may also establish a process or programme to develop the issue further. Such a programme would establish goals and identify the expected outcomes, including a timetable for these and the means to achieve them. The types of output to be developed could include: guidelines, codes of conduct, manuals of best practice, guidance for the institutions of the Convention, criteria, and so forth. The programme would proceed to develop these products, under the guidance of SBSTTA, and report results to the COP for review.
OBLIGATIONS ON PARTIES TO THE CONVENTION
The Convention constitutes a framework for action that will take place mainly at the national level. It places few precise binding obligations upon Parties, but rather provides goals and guidelines, and these are further elaborated by decisions of the COP. Most of the commitments of Parties under the Convention are qualified, and their implementation will depend upon the particular national circumstances and priorities of individual Parties, and the resources available to them. Nevertheless, Parties are obliged to address the issues covered by the Convention, the chief of which are outlined in the following sections.
Article 6: National strategies and plans
National biodiversity strategies and action plans
For most Parties, developing a national biodiversity strategy is likely to involve:
- establishing the institutional framework for developing the strategy, including designating leadership and ensuring a participative approach
- allocating or obtaining financial resources for the strategy process
- assessing the status of biological diversity within its jurisdiction
- articulating and debating the vision and goals for the strategy through a national dialogue with relevant stakeholders
- comparing the actual situation to the objectives and targets
- formulating options for action that cover key issues identified
- establishing criteria and priorities to help choose from among options
- matching actions and objectives
Developing and implementing national biodiversity action plans
- assigning roles and responsibilities
- agreeing the tools and approaches to be used
- establishing timeframes and deadlines for completion of tasks
- obtaining the budget
- agreeing indicators and measurable targets against which progress can be assessed
- determining reporting responsibilities, intervals and formats
- establishing procedures for incorporating lessons learned into the revision and updating of the strategy
The implementation of the Convention requires the mobilisation of both information and resources at the national level. As a first step, the Convention requires Parties to develop national strategies, plans or programmes for the conservation and sustainable use of biodiversity, or to adapt existing plans or programmes for this purpose (Article 6(a)). This may require a new planning process, or a review of existing environmental management or other national plans.
The Convention also requires Parties to integrate conservation and sustainable use of biodiversity into relevant sectoral or cross-sectoral plans, programmes and policies, as well as into national decision-making (Article 6(b)). This is clearly a more complex undertaking, requiring an assessment of the impacts of other sectors on biodiversity management. It will also require coordination among government departments or agencies. A national biodiversity planning process can identify the impacts and opportunities for integration.
Given the importance of stakeholder involvement in the implementation of the Convention, national planning processes should provide plenty of scope for public consultation and participation. The COP has recommended the guidance for the development of national strategies found in: Guidelines for Preparation of Biodiversity Country Studies
(UNEP) and National Biodiversity Planning: Guidelines Based on Early Country Experiences
(World Resources Institute, UNEP and IUCN). The financial mechanism has supported 125 countries in the preparation of their national biodiversity strategies and action plans (see chapter 3).
Article 7: Identification and monitoring of biodiversity
In contrast to some previous international or regional agreements on conservation, the Convention does not contain an internationally agreed list of species or habitats subject to special measures of protection. This is in line with the country-focused approach of the Convention. Instead, the Convention requires Parties to identify for themselves components of biodiversity important for conservation and sustainable use (Article 7).
Information provides the key for the implementation of the Convention, and Parties will require a minimum set of information in order to be able to identify national priorities. Whilst it contains no lists, the Convention does indicate, in Annex I, the types of species and ecosystems that Parties might consider for particular attention (see Box). Work is also underway within the Convention to elaborate Annex I in order to assist Parties further.
Indicative categories to guide Parties in the identification and monitoring of biodiversity
Ecosystems and habitats
- with high diversity, large numbers of endemic or threatened species, or wilderness
- required by migratory species
- of social, economic, cultural or scientific importance
- representative, unique or associated with key evolutionary or other biological processes
Species and communities
- wild relatives of domesticated or cultivated species
- of medicinal, agricultural or other economic value
- of social, scientific or cultural importance
- of importance for research into the conservation and sustainable use of biological diversity, such as indicator species
Described genomes or genes of social, scientific or economic importance
Parties are also required to monitor important components of biodiversity, and to identify processes or activities likely to have adverse effects on biodiversity. The development of indicators may assist Parties in monitoring the status of biological diversity and the effects of measures taken for its conservation and sustainable use.
Article 8: Conservation of biodiversity in-situ
The Convention addresses both in-situ and ex-situ conservation, but the emphasis is on in-situ measures, i.e. within ecosystems and natural habitats or, in the case of domesticated or cultivated species, in the surroundings where they have developed their distinctive properties. Article 8 sets out a comprehensive framework for in-situ conservation, and a Party's national biodiversity planning process should include consideration of the extent to which it currently addresses the following issues.
Protected areas
Parties should establish a system of protected areas or areas where special measures are required to conserve biological diversity, covering both marine and terrestrial areas. They are expected to develop guidelines for the selection, establishment and management of these areas, and to enhance the protection of such areas by the environmentally sound and sustainable development of adjacent areas.
Regulation and management of biological resources
Parties should regulate or manage important components of biological diversity whether found within protected areas or outside them. Legislation or other regulatory measures should therefore be introduced or maintained to promote the protection of ecosystems, natural and semi-natural habitats and the maintenance of viable populations of species in natural surroundings.
Regulation and management of activities
Under Article 7 Parties should attempt to identify activities that may be detrimental to biological diversity. Where such activities have been identified, Parties should take steps to manage them so as to reduce their impacts.
Rehabilitation and restoration
Parties should develop plans and management strategies for the rehabilitation and restoration of degraded ecosystems and the recovery of threatened species.
Alien species
Parties should prevent the introduction of, and control or eradicate, alien species which threaten ecosystems, habitats or native species.
Living modified organisms
Parties should establish or maintain means to manage the risks associated with the use and release of living modified organisms (LMOs) resulting from biotechnology. Parties are thus required to take action at the national level to ensure that LMOs do not cause adverse effects to biodiversity.
Traditional knowledge and practices
The Convention recognizes that indigenous and local communities embodying traditional lifestyles have a crucial rôle to play in the conservation and sustainable use of biodiversity. It calls on Parties to respect, preserve and maintain the knowledge, innovations and practices of indigenous and local communities and to encourage their customary uses of biological resources compatible with the conservation and sustainable use of these resources. By this, the Convention acknowledges the significance of traditional knowledge and practices, which should be taken into account in the implementation of all aspects of the Convention.
Article 9: Conservation of biodiversity ex-situ
While prioritising in-situ
conservation, the Convention recognizes the contribution that ex-situ
measures and facilities, such as gene banks, botanic gardens and zoos, can make to the conservation and sustainable use of biological diversity. It specifies that, where possible, facilities for ex-situ
conservation should be established and maintained in the country of origin of the genetic resources concerned.
The Convention does not, however, apply its provisions on access and benefit-sharing to ex-situ
resources collected prior to the entry into force of the Convention. This is of particular concern to developing countries, from which natural resources have already been removed and stored in ex-situ
collections, without a mechanism to ensure the sharing of benefits. The issue of the status of ex-situ
resources is currently being reviewed within the context of the work of the Food and Agriculture Organization of the United Nations.
Article 10: Sustainable use
Although the term conservation has sometimes been taken to incorporate sustainable use of resources, in the Convention the two terms appear side by side, and a specific Article of the Convention is devoted to sustainable use. This reflects the view of many countries during the negotiation of the Convention that the importance of sustainable use of resources be accorded explicit recognition. Sustainable use is defined in the Convention as:
“the use of components of biological diversity in a way and at a rate that does not lead to the long-term decline of biological diversity, thereby maintaining its potential to meet the needs and aspirations of present and future generations”.
The practical implications of this definition in terms of management are difficult to assess. Article 10 does not suggest quantitative methods for establishing the sustainability of use, but sets out five general areas of activity: the need to integrate conservation and sustainable use into national decision-making; to avoid or minimize adverse impacts on biological diversity; to protect and encourage customary uses of biodiversity in accordance with traditional cultural practices; to support local populations to develop and implement remedial action in degraded areas; and to encourage cooperation between its governmental authorities and its private sector in developing methods for sustainable use of biological resources.
Articles 11-14: Measures to promote conservation and sustainable use
The Convention makes explicit reference to a number of additional policy and procedural measures to promote conservation and sustainable use. For example, it requires Parties to adopt economically and socially sound incentives for this purpose (Article 11). It also recognizes the importance of public education and awareness to the effective implementation of the Convention (Article 13). Parties are therefore required to promote understanding of the importance of biodiversity conservation, and of the measures needed.
Research and training are critical to the implementation of almost every substantive obligation. Some deficit in human capacity exists in all countries, particularly so in developing countries. The Convention requires Parties to establish relevant scientific and technical training programmes, to promote research contributing to conservation and sustainable use, and to cooperate in using research results to develop and apply methods to achieve these goals (Article 12). Special attention must be given to supporting the research and training needs of developing countries, and this is explicitly linked to the provisions on access to and transfer of technology, technical and scientific cooperation and financial resources.
Parties are required to introduce appropriate environmental impact assessment (EIA) procedures for projects likely to have significant adverse effects on biodiversity (Article 14). Legislation on EIA will generally incorporate a number of elements, including a threshold for determining when an EIA will be required, procedural requirements for carrying it out, and the requirement that the assessment be taken into account when determining whether the project should proceed. In addition, Parties are required to consult with other States on activities under their jurisdiction and control that may adversely affect the biodiversity of other States, or areas beyond national jurisdiction.
Articles 15-21: Benefits
The Convention provides for scientific and technical cooperation to support the conservation and sustainable use of biological diversity, and a clearing-house mechanism is being developed to promote and facilitate this cooperation. The provisions on scientific and technical cooperation provide a basis for capacity-building activities. For example, the COP has requested the financial mechanism to support a Global Taxonomy Initiative
designed, among other things, to develop national, regional and sub-regional training programmes, and to strengthen reference collections in countries of origin. In addition to general provisions on cooperation, research and training, the Convention includes articles promoting access to the potential benefits resulting from the use of genetic resources, access to and transfer of relevant technology, and access to increased financial resources.
The potential benefits for developing country Parties under the Convention arise from the new position on conservation negotiated between developed and developing countries. The extent to which these benefits materialise is likely to be crucial to determining the long-term success of the Convention. Global biodiversity increases toward the tropics, and the Convention gives developing countries, in this zone and elsewhere, an opportunity to derive financial and technical benefits from their biological resources, while the world overall benefits from the goods and services that the biodiversity thus conserved will continue to provide.
Access to genetic resources and benefit-sharing
Before the negotiation of the Convention, genetic resources were considered to be freely available, despite their potential monetary value. However, the approach taken in the Convention is radically different. Article 15 reaffirms the sovereignty of Parties over their genetic resources, and recognizes the authority of States to determine access to those resources. While the Convention addresses sovereignty over resources, it does not address their ownership
, which remains to be determined at national level in accordance with national legislation or practice.
Although the sovereign rights of States over their genetic resources are emphasised, access to genetic resources for environmentally sound uses by scientific and commercial institutions under the jurisdiction of other Parties is to be facilitated. Since genetic resources are no longer regarded as freely available, the Convention paves the way for new types of regimes governing the relationship between providers and users of genetic resources.
Key elements in genetic resource use agreements
- the need to obtain the prior informed consent of the country of origin before obtaining access to resources
- the need for mutually agreed terms of access with the country of origin (and potentially with direct providers of genetic resources such as individual holders or local communities)
- the importance of benefit-sharing: the obligation to share, in a fair and equitable way, benefits arising from the use of genetic resources with the Party that provides those resources
It is generally agreed that benefit-sharing should extend not only to the government of the country of origin but also to indigenous and local communities directly responsible for the conservation and sustainable use of the genetic resources in question. National legislation might require bio-prospectors to agree terms with such communities for the use of resources, and this may be all the more crucial where bio-prospectors are seeking to draw upon not only the resources themselves, but also upon the knowledge of these communities about those resources and their potential use.
Access to and transfer of technologies
Under Article 16 of the Convention, Parties agree to share technologies relevant to the conservation of biological diversity and the sustainable use of its components, and also technologies that make use of genetic resources. Technology transfer under the Convention therefore incorporates both `traditional' technologies and biotechnology. Biotechnology is defined in the Convention as “any technological application that uses biological systems, living organisms, or derivatives thereof, to make or modify products or processes for specific use”.
Technologies which make use of genetic resources are subject to special provisions aimed at allowing the country of origin of the resources to share in the benefits arising out of the development of these technologies. The Convention makes it a specific requirement that all Parties create a legislative, administrative or policy framework with the aim that such technologies are transferred, on mutually agreed terms, to those providing the genetic resources. This obligation extends to technology protected by patents and other intellectual property rights.
More generally, developing country Parties are to have access to technology under terms which are fair and most favourable, including on concessional and preferential terms, where mutually agreed. Article 16 provides that where relevant technology is subject to an intellectual property right such as a patent, the transfer must be on terms which recognize and are consistent with the adequate and effective protection of the property right. However, it also goes on to provide that Parties are to cooperate in ensuring that intellectual property rights are supportive of, and do not run counter to, the objectives of the Convention.
Financial resources
All Parties undertake to provide financial support and incentives for implementation of the Convention at the national level, in accordance with their capabilities. In addition, developed country Parties agree to make available to developing country Parties new and additional financial resources to meet “the agreed full incremental costs” of implementing measures to fulfil their obligations. In addition to the financial mechanism mentioned earlier, developed country Parties may provide resources to improve implementation of the Convention through overseas development agencies and other bilateral channels.
The Convention explicitly recognizes that the extent to which developing country Parties will be able to implement their obligations under the Convention will depend on the developed country Parties fulfilling their obligations to provide resources. The Convention also acknowledges that economic and social development remains the overriding priority of developing countries, and in this regard recognizes the special circumstances and needs of the small island developing states. As a result of both these considerations, developed country Parties are expected to give due consideration to the dependence on, distribution and location of biological diversity within developing countries, in particular small island states and those that are most environmentally vulnerable, such as those with arid and semi-arid zones, coastal and mountainous areas.
ASSESSING IMPLEMENTATION OF THE CONVENTION
The Convention provides for Parties to present reports to the COP on measures taken to implement the provisions of the Convention and their effectiveness in meeting the objectives of the Convention (Article 26). At its second meeting, the COP decided that the first national reports should focus on implementation of Article 6 of the Convention. This article concerns the need to develop a national biodiversity strategy and action plan, and to ensure that the conservation and sustainable use of biological diversity is integrated with the policies and programmes of other sectors. The information in these reports was considered by the fourth meeting of the COP, which asked SBSTTA to give advice on the nature of the information required from Parties in order to assess the state of implementation of the Convention. A review of national implementation based on the information in the first national reports is contained in chapter 4.
At its fifth meeting, the COP adopted a methodology for national reporting that will enable Parties to provide information on the implementation of all their obligations, as derived from the articles of the Convention and from decisions of the COP that call for action by Parties. The reporting guidelines will permit Parties to consider the effectiveness of the measures taken and to identify national priorities, national capacity for implementation and constraints encountered. The COP will be able to identify issues that require further scientific or technical investigation, and to identify successes and constraints faced by Parties. In the latter case it will be better placed to decide what steps are necessary to support Parties, and to give appropriate guidance to the financial mechanism, institutions able to assist with capacity development, the Secretariat and to the Parties themselves.
Given the enormous breadth of the issues that the Convention seeks to address, there is need not only for cooperation between Parties, but also the need to develop institutional links and cooperative relationships with other international bodies. Mechanisms for coordinating these relationships are fundamental to the implementation of the Convention. Each meeting of the COP has reaffirmed the importance it attaches to cooperation and coordination between the Convention and other relevant conventions, institutions and processes, and has invited these to take an active rôle in the implementation of aspects of the Convention.
Equally importantly, the COP has reaffirmed the importance of the role to be played by groups other than States and international bodies. Non-state actors - national and international non-governmental organizations, scientific bodies, industrial and agricultural associations, and indigenous peoples' organizations, amongst others - have all been called upon to cooperate in scientific assessments, policy development, and implementation of the Convention's work programmes. In particular, as traditional knowledge about conserving and sustainably using biodiversity is central to the development and implementation of the work programmes, cooperation with the holders of traditional knowledge has been particularly emphasized.
The institutional structure of the Convention thus extends beyond those institutions established by the process itself. Cooperation is discussed in chapter 5.
1 The other three are climate change, international waters and depletion of the Earth's ozone layer. | http://www.cbd.int/gbo1/chap-02.shtml | 13 |
33 | The War of 1812 was fought between the United States of America and the British Empire - particularly Great Britain and the provinces of British North America, the antecedent of Canada. It lasted from 1812 to 1815. It was fought chiefly on the Atlantic Ocean and on the land, coasts and waterways of North America.
There were several immediate stated causes for the U.S. declaration of war. In 1807, Britain introduced a series of trade restrictions to impede American trade with France, a country with which Britain was at war. The United States contested these restrictions as illegal under international law. Both the impressment of American citizens into the Royal Navy, and Britain's military support of American Indians who were resisting the expansion of the American frontier into the Northwest further aggravated the relationship between the two countries. In addition, the United States sought to uphold national honor in the face of what they considered to be British insults, including the Chesapeake affair.
Indian raids hindered the expansion of the United States into potentially valuable farmlands in the Northwest Territory, comprising the modern states of Ohio, Indiana, Illinois, Michigan, and Wisconsin. Some Canadian historians in the early 20th century maintained that Americans had wanted to seize parts of Canada, a view that many Canadians still share. Others argue that inducing the fear of such a seizure had merely been a U.S. tactic designed to obtain a bargaining chip. Some members of the British Parliament and dissident American politicians such as John Randolph of Roanoke claimed then that land hunger rather than maritime disputes was the main motivation for the American declaration. Although the British made some concessions before the war on neutral trade, they insisted on the right to reclaim their deserting sailors. The British also had the long-standing goal of creating a large "neutral" Indian state that would cover much of Ohio, Indiana and Michigan. They made the demand as late as 1814 at the peace conference, but lost battles that would have validated their claims.
The war was fought in four theaters. Warships and privateers of both sides attacked each other's merchant ships. The British blockaded the Atlantic coast of the United States and mounted large-scale raids in the later stages of the war. Battles were also fought on the frontier, which ran along the Great Lakes and Saint Lawrence River and separated the United States from Upper and Lower Canada, and along the coast of the Gulf of Mexico. During the war, the Americans and British invaded each other's territory. These invasions were either unsuccessful or gained only temporary success. At the end of the war, the British held parts of Maine and some outposts in the sparsely populated West while the Americans held Canadian territory near Detroit, but these occupied territories were restored under the peace treaty.
In the United States, battles such as New Orleans and the earlier successful defence of Baltimore (which inspired the lyrics of the U.S. national anthem, The Star-Spangled Banner) produced a sense of euphoria over a "second war of independence" against Britain. It ushered in an "Era of Good Feelings," in which the partisan animosity that had once verged on treason practically vanished. Canada also emerged from the war with a heightened sense of national feeling and solidarity. Britain, which had regarded the war as a sideshow to the Napoleonic Wars raging in Europe, was less affected by the fighting; its government and people subsequently welcomed an era of peaceful relations with the United States.
The war was fought between the United States and the British Empire, particularly Great Britain and her North American colonies of Upper Canada (Ontario), Lower Canada (Québec), New Brunswick, Newfoundland, Nova Scotia, Prince Edward Island, Cape Breton Island (then a separate colony from Nova Scotia), and Bermuda.
In July 1812, William Hull led an invading force of 2,000 soldiers across the Detroit River and occupied the Canadian town of Sandwich (now a neighborhood of Windsor, Ontario). British Major General Isaac Brock attacked the supply lines of the occupying force with a battle group made up of British regulars, local militias, and Native Americans. By August, Hull and his troops (now numbering 2,500 with the addition of 500 Canadians) had retreated to Detroit, where, on August 16, Hull surrendered without a shot fired. The surrender cost the U.S. not only the city of Detroit, but the Michigan territory as well. In October 1812, the U.S. launched a second invasion of Canada, this time at the Niagara peninsula. On October 13, U.S. forces were again defeated at the Battle of Queenston Heights, where General Brock was killed.
The American strategy relied in part on state-raised militias, which were poorly trained, often resisted service, and in some cases were incompetently led. Financial and logistical problems also plagued the American effort. Military and civilian leadership was lacking and remained a critical American weakness until 1814. New England opposed the war and refused to provide troops or financing. Britain had excellent financing and logistics, but the war with France had a higher priority, so in 1812–13, it adopted a defensive strategy. After the abdication of Napoleon in 1814, the British were able to send veteran armies to the U.S., but by then the Americans had learned how to mobilise and fight.
At sea, the powerful Royal Navy blockaded much of the coastline, though it allowed substantial exports from New England, which traded with Britain and Canada in defiance of American laws. The blockade devastated American agricultural exports, but it helped stimulate local factories that replaced goods previously imported. The American strategy of using small gunboats to defend ports was a fiasco, as the British raided the coast at will. The most famous episode was a series of British raids on the shores of Chesapeake Bay, including an attack on Washington, D.C. that resulted in the British burning of the White House, the Capitol, the Navy Yard, and other public buildings, later called the "Burning of Washington." The British power at sea was sufficient to allow the Royal Navy to levy "contributions" on bayside towns in return for not burning them to the ground. The Americans were more successful in ship-to-ship actions, and built several fast frigates in their shipyard at Sackets Harbor, New York. They sent out several hundred privateers to attack British merchant ships; British commercial interests were damaged, especially in the West Indies.
The decisive use of naval power came on the Great Lakes and depended on a contest of building ships. In 1813, the Americans won control of Lake Erie and cut off British and Native American forces to the west from their supplies. Thus, the Americans gained one of their main objectives by breaking a confederation of tribes. Tecumseh, the leader of the tribal confederation, was killed at the Battle of the Thames. While some Natives continued to fight alongside British troops, they subsequently did so only as individual tribes or groups of warriors, and where they were directly supplied and armed by British agents. Control of Lake Ontario changed hands several times, with neither side able or willing to take advantage of the temporary superiority. The Americans ultimately gained control of Lake Champlain, and naval victory there forced a large invading British army to turn back in 1814.
Once Britain defeated France in 1814, it ended the trade restrictions and impressment of American sailors, thus removing another cause of the war. Great Britain and the United States agreed to a peace that left the prewar boundaries intact.
After two years of warfare, the major causes of the war had disappeared. Neither side had a reason to continue or a chance of gaining a decisive success that would compel their opponents to cede territory or advantageous peace terms. As a result of this stalemate, the two countries signed the Treaty of Ghent on December 24, 1814. News of the peace treaty took two months to reach the U.S., during which fighting continued. In this interim, the Americans defeated a British invasion army in the Battle of New Orleans, with American forces sustaining 71 casualties compared with 2,000 for the British. The British went on to capture Fort Bowyer, only to learn the next day of the war's end.
The war had the effect of uniting the populations within each country. Canadians celebrated the war as a victory because they avoided conquest. Americans celebrated victory personified in Andrew Jackson. He was the hero of the defence of New Orleans, and in 1828, was elected the 7th President of the United States.
On June 18, 1812, the United States declared war on Britain. The war had many causes, but at the centre of the conflict was Britain's ongoing war with Napoleon's France. The British, wrote Jon Latimer in 2007, had only one goal: "Britain's sole objective throughout the period was the defeat of France." If America helped France, then America had to be damaged until she stopped; as Latimer put it, "Britain was prepared to go to any lengths to deny neutral trade with France." Latimer concludes, "All this British activity seriously angered Americans."
The British were engaged in war with the First French Empire and did not wish to allow the Americans to trade with France, regardless of their theoretical neutral rights to do so. As Horsman explains, "If possible, England wished to avoid war with America, but not to the extent of allowing her to hinder the British war effort against France. Moreover… a large section of influential British opinion, both in the government and in the country, thought that America presented a threat to British maritime supremacy."
The United States Merchant Marine had come close to doubling between 1802 and 1810. Britain was the largest trading partner, receiving 80% of all U.S. cotton and 50% of all other U.S. exports. The United States Merchant Marine was the largest neutral fleet in the world by a large margin. The British public and press were resentful of the growing mercantile and commercial competition. The United States' view was that Britain was in violation of a neutral nation's right to trade with others as it saw fit.
During the Napoleonic Wars, the Royal Navy expanded to 175 ships of the line and 600 ships overall, requiring 140,000 sailors. While the Royal Navy could man its ships with volunteers in peacetime, in war, it competed with merchant shipping and privateers for a small pool of experienced sailors and turned to impressment when it was unable to man ships with volunteers alone. A sizeable number of sailors (estimated to be as many as 11,000 in 1805) in the United States merchant navy were Royal Navy veterans or deserters who had left for better pay and conditions. The Royal Navy went after them by intercepting and searching U.S. merchant ships for deserters. Such actions, especially the Chesapeake-Leopard Affair, incensed the Americans.
The United States believed that British deserters had a right to become United States citizens. Britain did not recognise naturalised United States citizenship, so in addition to recovering deserters, it considered United States citizens born British liable for impressment. Exacerbating the situation was the widespread use of forged identity papers by sailors. This made it all the more difficult for the Royal Navy to distinguish Americans from non-Americans and led it to impress some Americans who had never been British. (Some gained freedom on appeal.) American anger at impressment grew when British frigates stationed themselves just outside U.S. harbors in U.S. territorial waters and searched ships for contraband and impressed men in view of U.S. shores. "Free trade and sailors' rights" was a rallying cry for the United States throughout the conflict.
American expansion into the Northwest Territory (the modern states of Ohio, Indiana, Michigan, Illinois and Wisconsin) was being obstructed by indigenous leaders like Tecumseh, supplied and encouraged by the British. Americans on the frontier demanded that interference be stopped. Before 1940, some historians held that United States expansionism into Canada was also a reason for the war. However, one subsequent historian wrote, "Almost all accounts of the 1811–1812 period have stressed the influence of a youthful band, denominated War Hawks, on Madison's policy. According to the standard picture, these men were a rather wild and exuberant group enraged by Britain's maritime practices, certain that the British were encouraging the Indians and convinced that Canada would be an easy conquest and a choice addition to the national domain. Like all stereotypes, there is some truth in this tableau; however, inaccuracies predominate. First, Perkins has shown that those favoring war were older than those opposed. Second, the lure of the Canadas has been played down by most recent investigators." Some Canadian historians propounded the notion in the early 20th century, and it survives in public opinion in Ontario. This view was also shared by a member of the British Parliament at the time.
Madison and his advisers believed that conquest of Canada would be easy and that economic coercion would force the British to come to terms by cutting off the food supply for their West Indies colonies. Furthermore, possession of Canada would be a valuable bargaining chip. Frontiersmen demanded the seizure of Canada not because they wanted the land, but because the British were thought to be arming the Indians and thereby blocking settlement of the West. As Horsman concluded, "The idea of conquering Canada had been present since at least 1807 as a means of forcing England to change her policy at sea. The conquest of Canada was primarily a means of waging war, not a reason for starting it." Hickey flatly stated, "The desire to annex Canada did not bring on the war." Brown (1964) concluded, "The purpose of the Canadian expedition was to serve negotiation, not to annex Canada." Burt, a leading Canadian scholar, agreed completely, noting that Foster—the British minister to Washington—also rejected the argument that annexation of Canada was a war goal.
The majority of the inhabitants of Upper Canada (Ontario) were either exiles from the United States (United Empire Loyalists) or postwar immigrants. The Loyalists were hostile to union with the U.S., while the other settlers seem to have been uninterested. The Canadian colonies were thinly populated and only lightly defended by the British Army. Americans then believed that many in Upper Canada would rise up and greet a United States invading army as liberators, which did not happen. One reason American forces retreated after one successful battle inside Canada was that they could not obtain supplies from the locals. But the possibility of local assistance suggested an easy conquest, as former President Thomas Jefferson seemed to believe in 1812: "The acquisition of Canada this year, as far as the neighborhood of Quebec, will be a mere matter of marching, and will give us the experience for the attack on Halifax, the next and final expulsion of England from the American continent."
The declaration of war was passed by the smallest margin recorded on a war vote in the United States Congress. On May 11, Prime Minister Spencer Perceval was shot and killed by an assassin, resulting in a change of government that brought Lord Liverpool to power. Liverpool wanted a more practical relationship with the United States. He issued a repeal of the Orders in Council, but the U.S. was unaware of this, as it took three weeks for the news to cross the Atlantic.
Although the outbreak of the war had been preceded by years of angry diplomatic dispute, neither side was ready for war when it came. Britain was heavily engaged in the Napoleonic Wars, most of the British Army was engaged in the Peninsular War (in Spain), and the Royal Navy was compelled to blockade most of the coast of Europe. The number of British regular troops present in Canada in July 1812 was officially stated to be 6,034, supported by Canadian militia. Throughout the war, the British Secretary of State for War and the Colonies was the Earl of Bathurst. For the first two years of the war, he could spare few troops to reinforce North America and urged the commander in chief in North America (Lieutenant General Sir George Prevost) to maintain a defensive strategy. The naturally cautious Prevost followed these instructions, concentrating on defending Lower Canada at the expense of Upper Canada (which was more vulnerable to American attacks) and allowing few offensive actions. In the final year of the war, large numbers of British soldiers became available after the abdication of Napoleon Bonaparte. Prevost launched an offensive of his own into Upper New York State, but mishandled it and was forced to retreat after the British lost the Battle of Plattsburgh.
The United States was not prepared to prosecute a war, for President Madison assumed that the state militias would easily seize Canada and negotiations would follow. In 1812, the regular army consisted of fewer than 12,000 men. Congress authorised the expansion of the army to 35,000 men, but the service was voluntary and unpopular, it offered poor pay, and there were very few trained and experienced officers, at least initially. The militia called in to aid the regulars objected to serving outside their home states, were not amenable to discipline, and performed poorly in the presence of the enemy when they did serve beyond their home states. The U.S. had great difficulty financing its war. It had disbanded its national bank, and private bankers in the Northeast were opposed to the war.
The early disasters brought about chiefly by American unpreparedness and lack of leadership drove United States Secretary of War William Eustis from office. His successor, John Armstrong, Jr., attempted a coordinated strategy late in 1813 aimed at the capture of Montreal, but was thwarted by logistical difficulties, uncooperative and quarrelsome commanders and ill-trained troops. By 1814, the United States Army's morale and leadership had greatly improved, but the embarrassing Burning of Washington led to Armstrong's dismissal from office in turn. The war ended before the new Secretary of War James Monroe could put a new strategy into effect.
American prosecution of the war also suffered from its unpopularity, especially in New England, where antiwar spokesmen were vocal. The failure of New England to provide militia units or financial support was a serious blow. Threats of secession by New England states were loud; Britain immediately exploited these divisions, blockading only southern ports for much of the war and encouraging smuggling.
The war was conducted in three theatres of operations: at sea and along the Atlantic coast of the United States; along the Great Lakes and the Canadian frontier; and along the coast of the Gulf of Mexico in the south.
In 1812, Britain's Royal Navy was the world's largest, with over 600 cruisers in commission, plus a number of smaller vessels. Although most of these were involved in blockading the French navy and protecting British trade against (usually French) privateers, the Royal Navy nevertheless had 85 vessels in American waters. By contrast, the United States Navy comprised only 8 frigates, 14 smaller sloops and brigs, and no ships of the line whatsoever. However, some American frigates were exceptionally large and powerful for their class. Whereas the standard British frigate of the time was rated as a 38-gun ship, with its main battery consisting of 18-pounder guns, the USS Constitution, USS President, and USS United States were rated as 44-gun ships and were capable of carrying 56 guns, with a main battery of 24-pounders.
The British strategy was to protect their own merchant shipping to and from Halifax, Canada and the West Indies, and to enforce a blockade of major American ports to restrict American trade. Because of their numerical inferiority, the Americans aimed to cause disruption through hit-and-run tactics, such as the capture of prizes and engaging Royal Navy vessels only under favorable circumstances. Days after the formal declaration of war, however, two small squadrons sailed, including the frigate USS President and the sloop USS Hornet under Commodore John Rodgers, and the frigates USS United States and USS Congress, with the brig USS Argus under Captain Stephen Decatur. These were initially concentrated as one unit under Rodgers, and it was his intention to force the Royal Navy to concentrate its own ships to prevent isolated units being captured by his powerful force. Large numbers of American merchant ships were still returning to the United States, and if the Royal Navy was concentrated, it could not watch all the ports on the American seaboard. Rodgers' strategy worked, in that the Royal Navy concentrated most of its frigates off New York Harbor under Captain Philip Broke and allowed many American ships to reach home. However, his own cruise captured only five small merchant ships, and the Americans never subsequently concentrated more than two or three ships together as a unit.
Meanwhile, the USS Constitution, commanded by Captain Isaac Hull, sailed from Chesapeake Bay on July 12. On July 17, Broke's British squadron gave chase off New York, but the Constitution evaded her pursuers after two days. After briefly calling at Boston to replenish water, on August 19, the Constitution engaged the British frigate HMS Guerriere. After a 35-minute battle, Guerriere had been dismasted and captured and was later burned. Hull returned to Boston with news of this significant victory. On October 25, the USS United States, commanded by Captain Decatur, captured the British frigate HMS Macedonian, which he then carried back to port. At the close of the month, the Constitution sailed south, now under the command of Captain William Bainbridge. On December 29, off Bahia, Brazil, she met the British frigate HMS Java. After a battle lasting three hours, Java struck her colours and was burned after being judged unsalvageable. The USS Constitution, however, was undamaged in the battle and earned the name "Old Ironsides."
The successes gained by the three big American frigates forced Britain to construct five 40-gun, 24-pounder heavy frigates and two of its own 50-gun "spar-decked" frigates (HMS Leander and HMS Newcastle) and to razee three old 74-gun ships of the line to convert them to heavy frigates. The Royal Navy acknowledged that there were factors other than greater size and heavier guns. While the American ships had experienced and well-drilled volunteer crews, the enormous size of the overstretched Royal Navy meant that many ships were shorthanded, the average quality of crews suffered, and the constant sea duties of those serving in North America interfered with their training and exercises.
The capture of the three British frigates stimulated the British to greater exertions. More vessels were deployed on the American seaboard and the blockade tightened. On June 1, 1813, off Boston Harbor, the frigate USS Chesapeake, commanded by Captain James Lawrence, was captured by the British frigate HMS Shannon under Captain Sir Philip Broke. Lawrence was mortally wounded and famously cried out, "Don't give up the ship! Hold on, men!" Although the Chesapeake was only of equal strength to the average British frigate and the crew had mustered together only hours before the battle, the British press reacted with almost hysterical relief that the run of American victories had ended. Proportionally, this single action was one of the bloodiest contests recorded during the age of sail, with more dead and wounded than HMS Victory suffered in four hours of combat at Trafalgar. Lawrence died of his wounds, and Broke was so badly wounded that he never again held a sea command.
In January 1813, the American frigate USS Essex, under the command of Captain David Porter, sailed into the Pacific in an attempt to harass British shipping. Many British whaling ships carried letters of marque allowing them to prey on American whalers, and nearly destroyed the industry. The Essex challenged this practice. She inflicted considerable damage on British interests before she was captured off Valparaiso, Chile by the British frigate HMS Phoebe and the sloop HMS Cherub on March 28, 1814.
The British 6th-rate Cruizer class brig-sloops did not fare well against the American ship-rigged sloops of war. The USS Hornet and USS Wasp constructed before the war were notably powerful vessels, and the Frolic class built during the war even more so (although USS Frolic was trapped and captured by a British frigate and a schooner). The British brig-rigged sloops tended to suffer fire to their rigging far worse than the American ship-rigged sloops, while the ship-rigged sloops could back their sails in action, giving them another advantage in manoeuvring.
Following their earlier losses, the British Admiralty instituted a new policy that the three American heavy frigates should not be engaged except by a ship of the line or smaller vessels in squadron strength. An example of this was the capture of the USS President by a squadron of four British frigates in January 1815 (although the action was fought on the British side mainly by HMS Endymion). A month later, however, the USS Constitution managed to engage and capture two smaller British warships, HMS Cyane and HMS Levant, sailing in company.
The blockade of American ports later tightened to the extent that most American merchant ships and naval vessels were confined to port. The American frigates USS United States and USS Macedonian ended the war blockaded and hulked in New London, Connecticut. Some merchant ships were based in Europe or Asia and continued operations. Others, mainly from New England, were issued licenses to trade by Admiral Sir John Borlase Warren, commander in chief on the American station in 1813. This allowed Wellington's army in Spain to receive American goods and to maintain the New Englanders' opposition to the war. The blockade nevertheless resulted in American exports decreasing from $130 million in 1807 to $7 million in 1814.
The operations of American privateers (some of which belonged to the United States Navy, but most of which were private ventures) were extensive. They continued until the close of the war and were only partially affected by the strict enforcement of convoy by the Royal Navy. The depredations carried out in British home waters by the American sloop USS Argus exemplified the audacity of the American cruisers; she was eventually captured off St. David's Head in Wales by the British brig HMS Pelican on August 14, 1813. A total of 1,554 vessels were claimed captured by all American naval and privateering vessels, 1,300 of which were captured by privateers. However, insurer Lloyd's of London reported that only 1,175 British ships were taken, 373 of which were recaptured, for a total loss of 802.
As the Royal Navy base that supervised the blockade, Halifax profited greatly during the war. British privateers based there seized many French and American ships and sold their prizes in Halifax.
The war was the last time the British allowed privateering, since the practice was coming to be seen as politically inexpedient and of diminishing value in maintaining its naval supremacy. It was the swan song of Bermuda's privateers, who had vigorously returned to the practice after American lawsuits had put a stop to it two decades earlier. The nimble Bermuda sloops captured 298 enemy ships. British naval and privateering vessels between the Great Lakes and the West Indies captured 1,593.
Preoccupied in their pursuit of American privateers when the war began, the British naval forces had some difficulty in blockading the entire U.S. coast. The British government, having need of American foodstuffs for its army in Spain, benefited from the willingness of the New Englanders to trade with them, so no blockade of New England was at first attempted. The Delaware River and Chesapeake Bay were declared in a state of blockade on December 26, 1812.
This was extended to the coast south of Narragansett by November 1813 and to the entire American coast on May 31, 1814. In the meantime, illicit trade was carried on by collusive captures arranged between American traders and British officers. American ships were fraudulently transferred to neutral flags. Eventually, the U.S. government was driven to issue orders to stop illicit trading; this put only a further strain on the commerce of the country. The overpowering strength of the British fleet enabled it to occupy the Chesapeake and to attack and destroy numerous docks and harbors.
Additionally, commanders of the blockading fleet, based at the Bermuda dockyard, were given instructions to encourage the defection of American slaves by offering freedom, as they did during the Revolutionary War. Thousands of black slaves went over to the Crown with their families and were recruited into the 3rd (Colonial) Battalion of the Royal Marines on occupied Tangier Island, in the Chesapeake. A further company of colonial marines was raised at the Bermuda dockyard, where many freed slaves—men, women, and children—had been given refuge and employment. It was kept as a defensive force in case of an attack. These former slaves fought for Britain throughout the Atlantic campaign, including the attack on Washington, D.C. and the Louisiana Campaign, and most were later re-enlisted into British West India regiments or settled in Trinidad in August 1816, where seven hundred of these ex-marines were granted land (they reportedly organised in villages along the lines of military companies). Many other freed American slaves were recruited directly into West Indian regiments or newly created British Army units. A few thousand freed slaves were later settled at Nova Scotia by the British.
Maine, then part of Massachusetts, was a base for smuggling and illegal trade between the U.S. and the British. From his base in New Brunswick, in September 1814, Sir John Coape Sherbrooke led 500 British troops in the "Penobscot Expedition". In 26 days, he raided and looted Hampden, Bangor, and Machias, destroying or capturing 17 American ships. He won the Battle of Hampden (losing two killed while the Americans lost one killed) and occupied the village of Castine for the rest of the war. The Treaty of Ghent returned this territory to the United States. The British left in April 1815, at which time they took 10,750 pounds obtained from tariff duties at Castine. This money, called the "Castine Fund", was used in the establishment of Dalhousie University, in Halifax, Nova Scotia.
The strategic location of the Chesapeake Bay near America's capital made it a prime target for the British. Starting in March 1813, a squadron under Rear Admiral George Cockburn started a blockade of the bay and raided towns along the bay from Norfolk to Havre de Grace.
On July 4, 1813, Joshua Barney, a Revolutionary War naval hero, convinced the Navy Department to build the Chesapeake Bay Flotilla, a squadron of twenty barges to defend the Chesapeake Bay. Launched in April 1814, the squadron was quickly cornered in the Patuxent River, and while successful in harassing the Royal Navy, it was powerless to stop the British campaign that ultimately led to the "Burning of Washington." This expedition, led by Cockburn and General Robert Ross, was carried out between August 19 and 29, 1814, as the result of the hardened British policy of 1814 (although British and American commissioners had convened peace negotiations at Ghent in June of that year). As part of this, Admiral Warren had been replaced as commander in chief by Admiral Alexander Cochrane, with reinforcements and orders to coerce the Americans into a favourable peace.
Governor-in-chief of British North America Sir George Prevost had written to the Admirals in Bermuda, calling for retaliation for the American sacking of York (now Toronto). A force of 2,500 soldiers under General Ross—aboard a Royal Navy task force composed of the HMS Royal Oak, three frigates, three sloops, and ten other vessels—had just arrived in Bermuda. Released from the Peninsular War by the British victory there, these troops were intended for diversionary raids along the coasts of Maryland and Virginia. In response to Prevost's request, the British decided to employ this force, together with the naval and military units already on the station, to strike at Washington, D.C.
On August 24, U.S. Secretary of War John Armstrong insisted that the British would attack Baltimore rather than Washington, even when the British army was obviously on its way to the capital. The inexperienced American militia, which had congregated at Bladensburg, Maryland, to protect the capital, was routed in the Battle of Bladensburg, opening the route to Washington. While Dolley Madison saved valuables from the Presidential Mansion, President James Madison was forced to flee to Virginia.
The British commanders ate the supper that had been prepared for the President before they burned the Presidential Mansion; American morale was reduced to an all-time low. The British viewed their actions as retaliation for destructive American raids into Canada, most notably the Americans' burning of York (now Toronto) in 1813. Later that same evening, a furious storm swept into Washington, D.C., sending one or more tornadoes into the city that caused more damage but finally extinguished the fires with torrential rains. The naval yards were set afire at the direction of U.S. officials to prevent the capture of naval ships and supplies. The British left Washington, D.C. as soon as the storm subsided. Having destroyed Washington's public buildings, including the President's Mansion and the Treasury, the British army next moved to capture Baltimore, a busy port and a key base for American privateers. The subsequent Battle of Baltimore began with the British landing at North Point, where they were met by American militia. An exchange of fire began, with casualties on both sides. General Ross was killed by an American sniper as he attempted to rally his troops. The sniper himself was killed moments later, and the British withdrew. The British also attempted to attack Baltimore by sea on September 13 but were unable to reduce Fort McHenry, at the entrance to Baltimore Harbor.
The Battle of Fort McHenry was no battle at all. The British guns outranged the American cannon, so the ships stood off beyond U.S. range and bombarded the fort, which returned no fire. The British plan was to coordinate with a land force, but at that distance coordination proved impossible, so they called off the attack and withdrew. All the lights in Baltimore were extinguished the night of the attack, and the fort was bombarded for 25 hours. The only light came from the shells exploding over Fort McHenry, which illuminated the flag still flying over the fort. The defence of the fort inspired the American lawyer Francis Scott Key to write a poem that would eventually supply the lyrics to "The Star-Spangled Banner."
American leaders assumed that Canada could be easily overrun. Former President Jefferson optimistically referred to the conquest of Canada as "a matter of marching." Many Loyalist Americans had migrated to Upper Canada after the Revolutionary War, and it was assumed they would favor the American cause, but they did not. In prewar Upper Canada, General Prevost found himself in the unusual position of purchasing many provisions for his troops from the American side. This peculiar trade persisted throughout the war in spite of an abortive attempt by the American government to curtail it. In Lower Canada, much more populous, support for Britain came from the English elite, with their strong loyalty to the Empire, and from the French elite, who feared that American conquest would destroy the old order by introducing Protestantism, Anglicization, republican democracy, and commercial capitalism, and by weakening the Catholic Church. The French inhabitants also feared losing a shrinking supply of good land to potential American immigrants.
In 1812–13, British military experience prevailed over inexperienced American commanders. Geography dictated that operations would take place in the west: principally around Lake Erie, near the Niagara River between Lake Erie and Lake Ontario, and along the Saint Lawrence River and Lake Champlain. This was the focus of the three-pronged attacks by the Americans in 1812. Although cutting the St. Lawrence River through the capture of Montreal and Quebec would have made Britain's hold in North America unsustainable, the United States began operations first on the western frontier because of the general popularity there of a war with the British, who had sold arms to the American natives opposing the settlers.
The British scored an important early success when their detachment at St. Joseph Island, on Lake Huron, learned of the declaration of war before the nearby American garrison at the important trading post at Mackinac Island, in Michigan. A scratch force landed on the island on July 17, 1812, and mounted a gun overlooking Fort Mackinac. After the British fired one shot from their gun, the Americans, taken by surprise, surrendered. This early victory encouraged the natives, and large numbers of them moved to help the British at Amherstburg.
An American army under the command of William Hull invaded Canada on July 12, with forces chiefly composed of militiamen. Once on Canadian soil, Hull issued a proclamation ordering all British subjects to surrender, or "the horrors, and calamities of war will stalk before you." He also threatened to kill any British prisoner caught fighting alongside a native. The proclamation helped stiffen resistance to the American attacks. The senior British officer in Upper Canada, Major General Isaac Brock, decided to oppose Hull's forces, feeling that bold action was needed to calm the settler population in Canada and to convince the aboriginal peoples, whose support was needed to defend the region, that Britain was strong. Hull, already worried that his army was too weak to achieve its objectives, engaged only in minor skirmishing and felt still more vulnerable after the British captured a vessel on Lake Erie carrying his baggage, medical supplies, and important papers. On July 17, the American fort on Mackinac Island surrendered without a fight after a group of soldiers, fur traders, and native warriors, ordered by Brock to capture the settlement, deployed a piece of artillery overlooking the post before the garrison realised it. This capture secured British fur trade operations in the area, maintained a British connection to the Native American tribes in the Mississippi region, and inspired a sizeable number of Natives of the upper lakes region to take up arms against the United States. After learning of the capture, Hull believed that the tribes along the Detroit border would rise up against him and perhaps attack Americans on the frontier. On August 8 he withdrew most of his army from Canada to secure Detroit, sent a request for reinforcements, and ordered the American garrison at Fort Dearborn to abandon the post for fear of an aboriginal attack.
Brock advanced on Fort Detroit with 1,200 men. He sent a false letter and allowed it to be captured by the Americans; it stated that he required only 5,000 native warriors to capture Detroit. Hull feared the natives and their threats of torture and scalping. Believing the British had more troops than they did, Hull surrendered at Detroit without a fight on August 16. Fearing British-instigated indigenous attacks on other locations, Hull ordered the evacuation of the inhabitants of Fort Dearborn (Chicago) to Fort Wayne. After initially being granted safe passage, the inhabitants (soldiers and civilians) were attacked by Potawatomis on August 15 after traveling two miles (3 km), in what is known as the Battle of Fort Dearborn. The fort was subsequently burned.
Brock promptly transferred himself to the eastern end of Lake Erie, where American General Stephen Van Rensselaer was attempting a second invasion. An armistice (arranged by Prevost in the hope that the British renunciation of the Orders in Council, to which the United States had objected, might lead to peace) prevented Brock from invading American territory. When the armistice ended, the Americans attempted an attack across the Niagara River on October 13, but suffered a crushing defeat at Queenston Heights. Brock was killed during the battle. While the professionalism of the American forces would improve by the war's end, British leadership suffered after Brock's death. A final attempt in 1812 by American General Henry Dearborn to advance north from Lake Champlain failed when his militia refused to advance beyond American territory.
In contrast to the American militia, the Canadian militia performed well. French Canadians, who found the anti-Catholic stance of most of the United States troublesome, and United Empire Loyalists, who had fought for the Crown during the American Revolutionary War, strongly opposed the American invasion. However, many in Upper Canada were recent settlers from the United States who had no obvious loyalties to the Crown. Nevertheless, while there were some who sympathised with the invaders, the American forces found strong opposition from men loyal to the Empire.
After Hull's surrender of Detroit, General William Henry Harrison was given command of the U.S. Army of the Northwest. He set out to retake the city, which was now defended by Colonel Henry Procter in conjunction with Tecumseh. A detachment of Harrison's army was defeated at Frenchtown along the River Raisin on January 22, 1813. Procter left the prisoners with an inadequate guard, who could not prevent some of his North American aboriginal allies from attacking and killing perhaps as many as sixty Americans, many of whom were Kentucky militiamen. The incident became known as the "River Raisin Massacre." The defeat ended Harrison's campaign against Detroit, and the phrase "Remember the River Raisin!" became a rallying cry for the Americans.
In May 1813, Procter and Tecumseh set siege to Fort Meigs in northern Ohio. American reinforcements arriving during the siege were defeated by the natives, but the fort held out. The Indians eventually began to disperse, forcing Procter and Tecumseh to return to Canada. A second offensive against Fort Meigs also failed in July. In an attempt to improve Indian morale, Procter and Tecumseh attempted to storm Fort Stephenson, a small American post on the Sandusky River, only to be repulsed with serious losses, marking the end of the Ohio campaign.
On Lake Erie, American commander Captain Oliver Hazard Perry fought the Battle of Lake Erie on September 10, 1813. His decisive victory ensured American control of the lake, improved American morale after a series of defeats, and compelled the British to fall back from Detroit. This paved the way for General Harrison to launch another invasion of Upper Canada, which culminated in the U.S. victory at the Battle of the Thames on October 5, 1813, in which Tecumseh was killed. Tecumseh's death effectively ended the North American indigenous alliance with the British in the Detroit region. American control of Lake Erie meant the British could no longer provide essential military supplies to their aboriginal allies, who therefore dropped out of the war. The Americans controlled the area for the remainder of the war.
Because of the difficulties of land communications, control of the Great Lakes and the St. Lawrence River corridor was crucial. When the war began, the British already had a small squadron of warships on Lake Ontario and had the initial advantage. To redress the situation, the Americans established a Navy yard at Sackett's Harbor, New York. Commodore Isaac Chauncey took charge of the large number of sailors and shipwrights sent there from New York; they completed the second warship built there in a mere 45 days. Ultimately, 3000 men worked at the shipyard, building eleven warships and many smaller boats and transports. Having regained the advantage by their rapid building program, Chauncey and Dearborn attacked York (now called Toronto), the capital of Upper Canada, on April 27, 1813. The Battle of York was an American victory, marred by looting and the burning of the Parliament buildings and a library. However, Kingston was strategically more valuable to British supply and communications along the St. Lawrence. Without control of Kingston, the U.S. navy could not effectively control Lake Ontario or sever the British supply line from Lower Canada.
On May 27, 1813, an American amphibious force from Lake Ontario assaulted Fort George on the northern end of the Niagara River and captured it without serious losses. The retreating British forces were not pursued, however, until they had largely escaped and organised a counteroffensive against the advancing Americans at the Battle of Stoney Creek on June 5. On June 24, with the help of advance warning by Loyalist Laura Secord, another American force was forced to surrender by a much smaller British and native force at the Battle of Beaver Dams, marking the end of the American offensive into Upper Canada. Meanwhile, Commodore James Lucas Yeo had taken charge of the British ships on the lake and mounted a counterattack, which was nevertheless repulsed at the Battle of Sackett's Harbor. Thereafter, Chauncey and Yeo's squadrons fought two indecisive actions, neither commander seeking a fight to the finish.
Late in 1813, the Americans abandoned the Canadian territory they occupied around Fort George. They set fire to the village of Newark (now Niagara-on-the-Lake) on December 15, 1813, incensing the British and Canadians. Many of the inhabitants were left without shelter, freezing to death in the snow. This led to British retaliation following the Capture of Fort Niagara on December 18, 1813, and similar destruction at Buffalo on December 30, 1813.
In 1814, the contest for Lake Ontario turned into a building race. Eventually, by the end of the year, Yeo had constructed the HMS St. Lawrence, a first-rate ship of the line of 112 guns that gave him superiority, but the Engagements on Lake Ontario were an indecisive draw.
The British were potentially most vulnerable over the stretch of the St. Lawrence where it formed the frontier between Upper Canada and the United States. During the early days of the war, there was illicit commerce across the river. Over the winter of 1812 and 1813, the Americans launched a series of raids from Ogdensburg on the American side of the river, which hampered British supply traffic up the river. On February 21, Sir George Prevost passed through Prescott on the opposite bank of the river with reinforcements for Upper Canada. When he left the next day, the reinforcements and local militia attacked Ogdensburg. At the Battle of Ogdensburg, the Americans were forced to retire.
For the rest of the year, Ogdensburg had no American garrison, and many residents of Ogdensburg resumed visits and trade with Prescott. This British victory removed the last American regular troops from the Upper St. Lawrence frontier and helped secure British communications with Montreal. Late in 1813, after much argument, the Americans made two thrusts against Montreal. The plan eventually agreed upon was for Major General Wade Hampton to march north from Lake Champlain and join a force under General James Wilkinson that would embark in boats and sail from Sackett's Harbor on Lake Ontario and descend the St. Lawrence. Hampton was delayed by bad roads and supply problems, and his intense dislike of Wilkinson limited his willingness to support Wilkinson's plan. On October 25, his 4,000-strong force was defeated at the Chateauguay River by Charles de Salaberry's smaller force of French-Canadian Voltigeurs and Mohawks. Wilkinson's force of 8,000 set out on October 17, but was also delayed by bad weather. After learning that Hampton had been checked, Wilkinson heard that a British force under Captain William Mulcaster and Lieutenant Colonel Joseph Wanton Morrison was pursuing him, and by November 10, he was forced to land near Morrisburg, about 150 kilometers (90 mi) from Montreal. On November 11, Wilkinson's rear guard, numbering 2,500, attacked Morrison's force of 800 at Crysler's Farm and was repulsed with heavy losses. After learning that Hampton could not renew his advance, Wilkinson retreated to the U.S. and settled into winter quarters. He resigned his command after a failed attack on a British outpost at Lacolle Mills.
By the middle of 1814, American generals, including Major Generals Jacob Brown and Winfield Scott, had drastically improved the fighting abilities and discipline of the army. Their renewed attack on the Niagara peninsula quickly captured Fort Erie. Winfield Scott then gained a victory over an inferior British force at the Battle of Chippawa on July 5. An attempt to advance further ended with a hard-fought but inconclusive battle at Lundy's Lane on July 25.
The outnumbered Americans withdrew but withstood a prolonged Siege of Fort Erie. The British suffered heavy casualties in a failed assault and were weakened by exposure and shortage of supplies in their siege lines. Eventually the British raised the siege, but American Major General George Izard took over command on the Niagara front and followed up only halfheartedly. The Americans lacked provisions, and eventually destroyed the fort and retreated across the Niagara.
Meanwhile, following the abdication of Napoleon, 15,000 British troops were sent to North America under four of Wellington’s ablest brigade commanders. Fewer than half were veterans of the Peninsula and the rest came from garrisons. Along with the troops came instructions for offensives against the United States. British strategy was changing, and like the Americans, the British were seeking advantages for the peace negotiations. Governor-General Sir George Prevost was instructed to launch an invasion into the New York–Vermont region. The army available to him outnumbered the American defenders of Plattsburgh, but control of this town depended on being able to control Lake Champlain. On the lake, the British squadron under Captain George Downie and the Americans under Master Commandant Thomas MacDonough were more evenly matched.
On reaching Plattsburgh, Prevost delayed the assault until the arrival of Downie in the hastily completed 36-gun frigate HMS Confiance. Prevost forced Downie into a premature attack, but then unaccountably failed to provide the promised military backing. Downie was killed and his naval force defeated at the naval Battle of Plattsburgh in Plattsburgh Bay on September 11, 1814. The Americans now had control of Lake Champlain; Theodore Roosevelt later termed it "the greatest naval battle of the war." The successful land defence was led by Alexander Macomb. To the astonishment of his senior officers, Prevost then turned back, saying it would be too hazardous to remain on enemy territory after the loss of naval supremacy. Prevost's political and military enemies forced his recall. In London, a naval court-martial of the surviving officers of the Plattsburgh Bay debacle decided that defeat had been caused principally by Prevost's urging the squadron into premature action and then failing to afford the promised support from the land forces. Prevost died suddenly, just before his own court-martial was to convene. Prevost's reputation sank to a new low, as Canadians claimed that their militia under Brock did the job and he failed. Recently, however, historians have viewed him more kindly, measuring him not against Wellington but against his American foes. They judge Prevost's preparations for defending the Canadas with limited means to be energetic, well-conceived, and comprehensive; and against the odds, he had achieved the primary objective of preventing an American conquest.
Far to the west of where regular British forces were fighting, more than 65 forts were built in the Illinois Territory, mostly by American settlers. Skirmishes between settlers and U.S. soldiers against natives allied to the British occurred throughout the Mississippi River valley during the war. The Sauk were considered the most formidable tribe.
At the beginning of the war, Fort Osage, the westernmost U.S. outpost along the Missouri River, was abandoned. In September 1813, Fort Madison, an American outpost in what is now Iowa, was abandoned after it was attacked and besieged by natives, who had support from the British. This was one of the few battles fought west of the Mississippi. Black Hawk participated in the siege of Fort Madison, which helped to form his reputation as a resourceful Sauk leader.
Little of note took place on Lake Huron in 1813, but the American victory on Lake Erie and the recapture of Detroit isolated the British there. During the ensuing winter, a Canadian party under Lieutenant Colonel Robert McDouall established a new supply line from York to Nottawasaga Bay on Georgian Bay. When he arrived at Fort Mackinac with supplies and reinforcements, he sent an expedition to recapture the trading post of Prairie du Chien in the far west. The Siege of Prairie du Chien ended in a British victory on July 20, 1814.
Earlier in July, the Americans sent a force of five vessels from Detroit to recapture Mackinac. A mixed force of regulars and volunteers from the militia landed on the island on August 4. They did not attempt to achieve surprise, and at the brief Battle of Mackinac Island, they were ambushed by natives and forced to re-embark. The Americans discovered the new base at Nottawasaga Bay, and on August 13, they destroyed its fortifications and a schooner that they found there. They then returned to Detroit, leaving two gunboats to blockade Mackinac. On September 4, these gunboats were taken unawares and captured by enemy boarding parties from canoes and small boats. This Engagement on Lake Huron left Mackinac under British control.
The British garrison at Prairie du Chien also fought off another attack by Major Zachary Taylor. In this distant theatre, the British retained the upper hand until the end of the war, through the allegiance of several indigenous tribes that received British gifts and arms. In 1814 U.S. troops retreating from the Battle of Credit Island on the upper Mississippi attempted to make a stand at Fort Johnson, but the fort was soon abandoned, along with most of the upper Mississippi valley.
After the U.S. was pushed out of the Upper Mississippi region, it held on to eastern Missouri and the St. Louis area. Two notable battles fought against the Sauk were the Battle of Cote Sans Dessein, in April 1815, at the mouth of the Osage River in the Missouri Territory, and the Battle of the Sink Hole, in May 1815, near Fort Cap au Gris.
At the conclusion of peace, Mackinac and other captured territory was returned to the United States. Fighting between Americans, the Sauk, and other indigenous tribes continued through 1817, well after the war ended in the east.
In March 1814, General Andrew Jackson led a force of Tennessee militia, Choctaw and Cherokee warriors, and U.S. regulars southward to attack the Creek tribes, led by Chief Menawa. On March 26, Jackson and General John Coffee decisively defeated the Creek at Horseshoe Bend, killing 800 of 1,000 Creeks at a cost of 49 killed and 154 wounded out of approximately 2,000 American and Cherokee forces. Jackson pursued the surviving Creek until they surrendered. Most historians consider the Creek War part of the War of 1812 because the British supported the Creek.
By 1814, both sides, weary of a costly war that seemingly offered nothing but stalemate, were ready to grope their way to a settlement and sent delegates to Ghent (in modern Belgium). The negotiations began in early August and dragged on until December 24, when a final agreement was signed; both sides had to ratify it before it could take effect. Meanwhile, both sides planned new invasions.
It is difficult to measure accurately the costs of the American war to Britain, because they are bound up in general expenditure on the Napoleonic War in Europe. But an estimate may be made based on the increased borrowing undertaken during the period, with the American war as a whole adding some £25 million to the national debt. In the U.S., the cost was $105 million, although because the British pound was worth considerably more than the dollar, the costs of the war to both sides were roughly equal. The national debt rose from $45 million in 1812 to $127 million by the end of 1815, although by selling bonds and treasury notes at deep discounts—and often for irredeemable paper money due to the suspension of specie payment in 1814—the government received only $34 million worth of specie. By this time, the British blockade of U.S. ports was having a detrimental effect on the American economy. Licensed flour exports, which had been close to a million barrels in 1812 and 1813, fell to 5,000 in 1814. By this time, insurance rates on Boston shipping had reached 75%, coastal shipping was at a complete standstill, and New England was considering secession. Exports and imports fell dramatically as American shipping engaged in foreign trade dropped from 948,000 tons in 1811 to just 60,000 tons by 1814. But although American privateers found chances of success much reduced, with most British merchantmen now sailing in convoy, privateering continued to prove troublesome to the British. With insurance rates between Liverpool, England and Halifax, Nova Scotia rising to 30%, the Morning Chronicle complained that with American privateers operating around the British Isles, "We have been insulted with impunity." The British could not fully celebrate a great victory in Europe until there was peace in North America, and more pertinently, taxes could not come down until such time. Landowners particularly balked at continued high taxation; both they and the shipping interests urged the government to secure peace.
Britain, which had forces in uninhabited areas near Lake Superior and Lake Michigan and two towns in Maine, demanded the ceding of large areas, plus turning most of the Midwest into a neutral zone for Indians. American public opinion was outraged when Madison published the demands; even the Federalists were now willing to fight on. The British were planning three invasions. One force burned Washington but failed to capture Baltimore, and sailed away when its commander was killed. In New York, 10,000 British veterans were marching south until a decisive defeat at the Battle of Plattsburgh forced them back to Canada. Nothing was known of the fate of the third large invasion force, aimed at capturing New Orleans and the Southwest. The Prime Minister wanted the Duke of Wellington to command in Canada and finally win the war; Wellington said no, because the war was a military stalemate and should be promptly ended:
I think you have no right, from the state of war, to demand any concession of territory from America ... You have not been able to carry it into the enemy's territory, notwithstanding your military success and now undoubted military superiority, and have not even cleared your own territory on the point of attack. You can not on any principle of equality in negotiation claim a cession of territory except in exchange for other advantages which you have in your power ... Then if this reasoning be true, why stipulate for the uti possidetis? You can get no territory: indeed, the state of your military operations, however creditable, does not entitle you to demand any.
With a rift opening between Britain and Russia at the Congress of Vienna and little chance of improving the military situation in North America, Britain was prepared to end the war promptly. In concluding the war, the Prime Minister, Lord Liverpool, was taking into account domestic opposition to continued taxation, especially among Liverpool and Bristol merchants—keen to get back to doing business with America—and there was nothing to gain from prolonged warfare.
On December 24, 1814, diplomats from the two countries, meeting in Ghent, United Kingdom of the Netherlands (now in Belgium), signed the Treaty of Ghent. This was ratified by the Americans on February 16, 1815. The British government approved the treaty within a few hours of receiving it and the Prince Regent signed it on December 27, 1814.
Unaware of the peace, Andrew Jackson's forces moved to New Orleans, Louisiana in late 1814 to defend against a large-scale British invasion. Jackson defeated the British at the Battle of New Orleans on January 8, 1815. At the end of the day, the British had a little over 2,000 casualties: 278 dead (including Major Generals Pakenham and Gibbs), 1,186 wounded (including Major General Keane), and 484 captured or missing. The Americans had 71 casualties: 13 dead, 39 wounded, and 19 missing. It was hailed as a great victory for the U.S., making Jackson a national hero and eventually propelling him to the presidency.
The British gave up on New Orleans but moved to attack the Gulf Coast port of Mobile, Alabama, which the Americans had seized from the Spanish in 1813. In one of the last military actions of the war, 1,000 British troops won the Battle of Fort Bowyer on February 12, 1815. When news of peace arrived the next day, they abandoned the fort and sailed home. In May 1815, a band of British-allied Sauk, unaware that the war had ended months earlier, attacked a small band of U.S. soldiers northwest of St. Louis. Intermittent fighting, primarily with the Sauk, continued in the Missouri Territory well into 1817, although it is unknown whether the Sauk were acting on their own or on behalf of Great Britain. A few American warships, isolated at sea and still unaware of the peace, continued fighting well into 1815; they were the last American forces to take offensive action against the British.
British losses in the war were about 1,600 killed in action and 3,679 wounded; 3,321 British died from disease. American losses were 2,260 killed in action and 4,505 wounded. While the number of Americans who died from disease is not known, it is estimated that 17,000 perished. These figures do not include deaths among American or Canadian militia forces or losses among native tribes.
In addition, at least 3,000 American slaves escaped to the British because of their offer of freedom, the same offer they had made during the American Revolution. Many other slaves simply escaped in the chaos of war and achieved their freedom on their own. The British settled some of the newly freed slaves in Nova Scotia. Four hundred freedmen were settled in New Brunswick. The Americans protested that Britain's failure to return the slaves violated the Treaty of Ghent. After arbitration by the Czar of Russia, the British paid $1,204,960 in damages to Washington, which reimbursed the slaveowners.
The war was ended by the Treaty of Ghent, signed on December 24, 1814 and taking effect February 18, 1815. The terms stated that fighting between the United States and Britain would cease, all conquered territory was to be returned to the prewar claimant, the Americans were to gain fishing rights in the Gulf of Saint Lawrence, and that the United States and Britain agreed to recognise the prewar boundary between Canada and the United States.
The Treaty of Ghent, which was promptly ratified by the Senate in 1815, ignored the grievances that led to war. American complaints of Indian raids, impressment and blockades had ended when Britain's war with France (apparently) ended, and were not mentioned in the treaty. The treaty proved to be merely an expedient to end the fighting. Mobile and parts of western Florida remained permanently in American possession, despite objections by Spain. Thus, the war ended with no significant territorial losses for either side.
Neither side lost territory in the war, nor did the treaty that ended it address the original points of contention—and yet it changed much between the United States of America and Britain.
The Treaty of Ghent established the status quo ante bellum; that is, there were no territorial changes made by either side. The issue of impressment was made moot when the Royal Navy stopped impressment after the defeat of Napoleon. Except for occasional border disputes and the circumstances of the American Civil War, relations between the United States and Britain remained generally peaceful for the rest of the nineteenth century, and the two countries became close allies in the twentieth century.
Border adjustments between the United States and British North America were made in the Treaty of 1818. A border dispute along the Maine-New Brunswick border was settled by the 1842 Webster-Ashburton Treaty after the bloodless Aroostook War, and the border in the Oregon Territory was settled by splitting the disputed area in half by the 1846 Oregon Treaty. Yet, according to Winston Churchill, "The lessons of the war were taken to heart. Anti-American sentiment in Britain ran high for several years, but the United States was never again refused proper treatment as an independent power."
The U.S. ended the aboriginal threat on its western and southern borders. The nation also gained a psychological sense of complete independence as people celebrated their "second war of independence." Nationalism soared after the victory at the Battle of New Orleans. The opposition Federalist Party collapsed, and the Era of Good Feelings ensued. The U.S. did make one minor territorial gain during the war, though not at Britain's expense, when it captured Mobile, Alabama from Spain.
No longer questioning the need for a strong Navy, the United States built three new 74-gun ships of the line and two new 44-gun frigates shortly after the end of the war. (Another frigate had been destroyed to prevent it being captured on the stocks.) In 1816, the U.S. Congress passed into law an "Act for the gradual increase of the Navy" at a cost of $1,000,000 a year for eight years, authorizing nine ships of the line and 12 heavy frigates. The Captains and Commodores of the U.S. Navy became the heroes of their generation in the United States. Decorated plates and pitchers of Decatur, Hull, Bainbridge, Lawrence, Perry, and Macdonough were made in Staffordshire, England, and found a ready market in the United States. Three of the war heroes used their celebrity to win national office: Andrew Jackson (elected President in 1828 and 1832), Richard Mentor Johnson (elected Vice President in 1836), and William Henry Harrison (elected President in 1840).
New England states became increasingly frustrated over how the war was being conducted and how the conflict was affecting them. They complained that the United States government was not investing enough in the states' defences militarily and financially and that the states should have more control over their militias. The increased taxes, the British blockade, and the occupation of some of New England by enemy forces also agitated public opinion in the states. As a result, at the Hartford Convention (December 1814–January 1815) held in Connecticut, New England representatives asked that the powers of their states be fully restored. Nevertheless, a common misconception propagated by newspapers of the time was that the New England representatives wanted to secede from the Union and make a separate peace with the British. This view is not supported by what happened at the Convention.
Slaveholders primarily in the South suffered considerable loss of property as tens of thousands of slaves escaped to British lines or ships for freedom, despite the difficulties. The planters' complacency about slave contentment was shocked by their seeing slaves who would risk so much to be free.
Today, American popular memory includes the British capture and destruction of the U.S. Presidential Mansion in August 1814, which necessitated its extensive renovation. From this event has arisen the tradition that the building's new white paint inspired a popular new nickname, the White House. However, the tale appears apocryphal; the name "White House" is first attested in 1811. Another memory is the successful American defence of Fort McHenry in September 1814, which inspired the lyrics of the U.S. national anthem, The Star-Spangled Banner.
The War of 1812 was seen by Loyalists in British North America (which formed the Dominion of Canada in 1867) as a victory, as they had successfully defended their borders from an American takeover. The outcome gave Empire-oriented Canadians confidence and, together with the postwar "militia myth" that the civilian militia had been primarily responsible rather than the British regulars, was used to stimulate a new sense of Canadian nationalism.
A long-term implication of the militia myth — which was false, but remained popular in the Canadian public at least until World War I — was that Canada did not need a regular professional army. The U.S. Army had done poorly, on the whole, in several attempts to invade Canada, and the Canadians had shown that they would fight bravely to defend their country. But the British did not doubt that the thinly populated territory would be vulnerable in a third war. "We cannot keep Canada if the Americans declare war against us again," Admiral Sir David Milne wrote to a correspondent in 1817.
The Battle of York demonstrated the vulnerability of Upper and Lower Canada. In the 1820s, work began on La Citadelle at Quebec City as a defence against the United States; the fort remains an operational base of the Canadian Forces. Additionally, work began on the Halifax citadel to defend the port against American attacks. This fort remained in operation through World War II.
In the 1830s, the Rideau Canal was built to provide a secure waterway from Montreal to Lake Ontario, avoiding the narrows of the St. Lawrence River, where ships could be vulnerable to American cannon fire. To defend the western end of the canal, the British also built Fort Henry at Kingston, which remained operational until 1891.
The Native Americans allied to Great Britain lost their cause. The British proposal to create a "neutral" Indian zone in the American West was rejected at the Ghent peace conference and never resurfaced. In the decade after 1815, many white Americans assumed that the British continued to conspire with their former native allies in an attempt to forestall U.S. hegemony in the Great Lakes region. Such perceptions were faulty. After the Treaty of Ghent, the natives became an undesirable burden to British policymakers who now looked to the United States for markets and raw materials. British agents in the field continued to meet regularly with their former native partners, but they did not supply arms or encouragement for Indian campaigns to stop U.S. expansionism in the Midwest. Abandoned by their powerful sponsor, Great Lakes-area natives ultimately migrated or reached accommodations with the American authorities and settlers. In the Southeast, Indian resistance had been crushed by General Andrew Jackson; as President (1829–37), Jackson systematically removed the major tribes to reservations west of the Mississippi.
Bermuda had been largely left to the defences of its own militia and privateers prior to U.S. independence, but the Royal Navy had begun buying up land and operating from there in 1795, as its location was a useful substitute for the lost U.S. ports. It originally was intended to be the winter headquarters of the North American Squadron, but the war saw it rise to a new prominence. As construction work progressed through the first half of the century, Bermuda became the permanent naval headquarters in Western waters, housing the Admiralty and serving as a base and dockyard. The military garrison was built up to protect the naval establishment, heavily fortifying the archipelago that came to be described as the "Gibraltar of the West." Defence infrastructure would remain the central leg of Bermuda's economy until after World War II.
The war was scarcely noticed at the time and is barely remembered in Britain, because it was overshadowed by the far larger conflict against the French Empire under Napoleon. Britain's wartime aims of impressing seamen and blocking American trade with France had been achieved and no longer mattered once the European war ended. The Royal Navy was the world's dominant nautical power in the early 19th century (and would remain so for another century). During the War of 1812, it had used its overwhelming strength to cripple American maritime trade and launch raids on the American coast. The United States Navy had only 14 frigates and smaller ships to crew at the start of the war, while Britain maintained 85 ships in North American waters alone. Yet—as the Royal Navy was acutely aware—the U.S. Navy had won most of the single-ship duels during the war. The causes of the losses were many, but among them were the heavier broadside of the American 44-gun frigates and the fact that the large crew on each U.S. Navy ship was hand-picked from among the approximately 55,000 unemployed merchant seamen in American harbors. The crews of the British fleet, which numbered some 140,000 men, were rounded out with impressed ordinary seamen and landsmen. In an order to his ships, Admiral John Borlase Warren directed that less attention be paid to spit-and-polish and more to gunnery practice. It is notable that the well-trained gunnery of HMS Shannon allowed her victory over the untrained crew of the USS Chesapeake.
The War of 1812 was fought between the British Empire and the United States from 1812 to 1815, on land in North America and at sea. More than half of the British forces were made up of Canadian militia (volunteers) because British soldiers had to fight Napoleon in Europe. The British defeated the attacking American forces. In the end, the war created a greater sense of nationalism in both Canada and the United States.
Some people in the United States wanted to maintain their independence. Some also wanted the United States to take over Canada. The war began when the United States attacked the Canadian provinces in 1812 and 1813, but the borders were successfully defended by the British. In 1813, British and American ships fought on Lake Erie in a battle known as the Battle of Lake Erie. The Americans under Oliver Hazard Perry won.
In 1814, British soldiers landed in the United States. They burned the public buildings of Washington, D.C. and also attacked Baltimore. It was during the attack on Baltimore that the American lawyer Francis Scott Key wrote a poem that later became the lyrics of the U.S. national anthem, "The Star-Spangled Banner." The final battle of the war took place in January 1815, when the British attacked New Orleans and were beaten by the Americans under General Andrew Jackson. The battle was fought after the peace treaty had been signed.
The War of 1812 ended in 1815, even though the Treaty of Ghent, which was supposed to end the war, was signed in Ghent on December 24, 1814. Both sides thought they had won, but no great changes took place. News of the peace treaty did not reach the U.S. until after the battle in New Orleans in January 1815.
The Townshend Acts were a series of acts passed beginning in 1767 by the Parliament of Great Britain relating to the British colonies in North America. The acts are named after Charles Townshend, the Chancellor of the Exchequer, who proposed the program. Historians vary slightly in which acts they include under the heading "Townshend Acts", but five laws are often mentioned: the Revenue Act of 1767, the Indemnity Act, the Commissioners of Customs Act, the Vice Admiralty Court Act, and the New York Restraining Act. The purpose of the Townshend Acts was to raise revenue in the colonies to pay the salaries of governors and judges so that they would be independent of colonial rule, to create a more effective means of enforcing compliance with trade regulations, to punish the province of New York for failing to comply with the 1765 Quartering Act, and to establish the precedent that the British Parliament had the right to tax the colonies. The Townshend Acts were met with resistance in the colonies, prompting the occupation of Boston by British troops in 1768, which eventually resulted in the Boston Massacre of 1770.
As a result of the massacre in Boston, Parliament began to consider a motion to partially repeal the Townshend duties. Most of the new taxes were repealed, but the tax on tea was retained. The British government continued in its attempt to tax the colonists without their consent and the Boston Tea Party and the American Revolution followed.
Following the Seven Years' War (1756–1763), the British Empire was deep in debt. To help pay some of the costs of the newly expanded empire, the Parliament of Great Britain decided to levy new taxes on the colonies of British America. Previously, through the Trade and Navigation Acts, Parliament had used taxation to regulate the trade of the empire. However, with the Sugar Act of 1764, Parliament sought for the first time to tax the colonies for the specific purpose of raising revenue. American colonists initially objected to the Sugar Act for economic reasons, but before long they recognized that there were constitutional issues involved.
It was argued that the Bill of Rights 1688 protected British subjects from being taxed without the consent of a truly representative Parliament. Because the colonies elected no members of the British Parliament, many colonists viewed Parliament's attempt to tax them as a violation of the constitutional doctrine of taxation only by consent. Some British politicians countered this argument with the theory of "virtual representation", which maintained that the colonists were in fact represented in Parliament even though they elected no members. This issue, only briefly debated following the Sugar Act, became a major point of contention following Parliament's passage of the 1765 Stamp Act. The Stamp Act proved to be wildly unpopular in the colonies; this unpopularity, along with the lack of substantial revenue raised, contributed to its repeal the following year.
Implicit in the Stamp Act dispute was an issue more fundamental than taxation and representation: the question of the extent of Parliament's authority in the colonies. Parliament provided its answer to this question when it repealed the Stamp Act in 1766 by simultaneously passing the Declaratory Act, which proclaimed that Parliament could legislate for the colonies "in all cases whatsoever".
The first of the Townshend Acts, sometimes simply known as the Townshend Act, was the Revenue Act of 1767. This act represented the Chatham ministry's new approach for generating tax revenue in the American colonies after the repeal of the Stamp Act in 1766. The British government had gotten the impression that because the colonists had objected to the Stamp Act on the grounds that it was a direct (or "internal") tax, colonists would therefore accept indirect (or "external") taxes, such as taxes on imports. With this in mind, Charles Townshend, the Chancellor of the Exchequer, devised a plan that placed new duties on paper, paint, lead, glass, and tea that were imported into the colonies. These were items that were not produced in North America and that the colonists were only allowed to buy from Great Britain.
The British government's belief that the colonists would accept "external" taxes resulted from a misunderstanding of the colonial objection to the Stamp Act. The colonists' objection to "internal" taxes did not mean that they would accept "external" taxes; the colonial position was that any tax laid by Parliament for the purpose of raising revenue was unconstitutional. "Townshend's mistaken belief that Americans regarded internal taxes as unconstitutional and external taxes constitutional", wrote historian John Phillip Reid, "was of vital importance in the history of events leading to the Revolution." The Townshend Revenue Act received the royal assent on 29 June 1767. There was little opposition expressed in Parliament at the time. "Never could a fateful measure have had a more quiet passage", wrote historian Peter Thomas.
The Revenue Act was passed in conjunction with the Indemnity Act of 1767, which was intended to make the tea of the British East India Company more competitive with smuggled Dutch tea. The Indemnity Act repealed taxes on tea imported to England, allowing it to be re-exported more cheaply to the colonies. This tax cut in England would be partially offset by the new Revenue Act taxes on tea in the colonies. The Revenue Act also reaffirmed the legality of writs of assistance, or general search warrants, which gave customs officials broad powers to search houses and businesses for smuggled goods.
The original stated purpose of the Townshend duties was to raise a revenue to help pay the cost of maintaining an army in North America. Townshend changed the purpose of the tax plan, however, and instead decided to use the revenue to pay the salaries of some colonial governors and judges. Previously, the colonial assemblies had paid these salaries, but Parliament hoped to take the "power of the purse" away from the colonies. According to historian John C. Miller, "Townshend ingeniously sought to take money from Americans by means of parliamentary taxation and to employ it against their liberties by making colonial governors and judges independent of the assemblies."
Some members of Parliament objected because Townshend's plan was expected to generate only £40,000 in yearly revenue, but he explained that once the precedent for taxing the colonists had been firmly established, the program could gradually be expanded until the colonies paid for themselves. According to historian Peter Thomas, Townshend's "aims were political rather than financial".
To better collect the new taxes, the Commissioners of Customs Act of 1767 established the American Board of Customs Commissioners, which was modeled on the British Board of Customs. The American Customs Board was created because of the difficulties the British Board faced in enforcing trade regulations in the distant colonies. Five commissioners were appointed to the board, which was headquartered in Boston. The American Customs Board would generate considerable hostility in the colonies towards the British government. According to historian Oliver M. Dickerson, "The actual separation of the continental colonies from the rest of the Empire dates from the creation of this independent administrative board."
Another measure to aid in enforcement of the trade laws was the Vice Admiralty Court Act of 1768. Although often included in discussions of the Townshend Acts, this act was initiated by the Cabinet when Townshend was not present, and was not passed until after his death. Before this act, there was just one vice admiralty court in North America, located in Halifax, Nova Scotia. Established in 1764, this court proved to be too remote to serve all of the colonies, and so the 1768 Vice Admiralty Court Act created four district courts, which were located at Halifax, Boston, Philadelphia, and Charleston. One purpose of the vice admiralty courts, which did not have juries, was to help customs officials prosecute smugglers, since colonial juries were reluctant to convict persons for violating unpopular trade regulations.
Townshend also faced the problem of what to do about the New York Provincial Assembly, which had refused to comply with the 1765 Quartering Act because its members saw the act's financial provisions as levying an unconstitutional tax. The New York Restraining Act, which according to historian Robert Chaffin was "officially a part of the Townshend Acts", suspended the power of the Assembly until it complied with the Quartering Act. The Restraining Act never went into effect because, by the time it was passed, the New York Assembly had already appropriated money to cover the costs of the Quartering Act. The Assembly avoided conceding the right of Parliament to tax the colonies by making no reference to the Quartering Act when appropriating this money; they also passed a resolution stating that Parliament could not constitutionally suspend an elected legislature.
Townshend knew that his program would be controversial in the colonies, but he argued that, "The superiority of the mother country can at no time be better exerted than now." The Townshend Acts did not create an instant uproar like the Stamp Act had done two years earlier, but before long, opposition to the program had become widespread. Townshend did not live to see this reaction, having died suddenly on 4 September 1767.
The most influential colonial response to the Townshend Acts was a series of twelve essays by John Dickinson entitled "Letters from a Farmer in Pennsylvania", which began appearing in December 1767. Eloquently articulating ideas already widely accepted in the colonies, Dickinson argued that there was no difference between "internal" and "external" taxes, and that any taxes imposed on the colonies by Parliament for the sake of raising a revenue were unconstitutional. Dickinson warned colonists not to concede to the taxes just because the rates were low, since this would set a dangerous precedent.
Dickinson sent a copy of his "Letters" to James Otis of Massachusetts, informing Otis that "whenever the Cause of American Freedom is to be vindicated, I look towards the Province of Massachusetts Bay". The Massachusetts House of Representatives began a campaign against the Townshend Acts by first sending a petition to King George asking for the repeal of the Revenue Act, and then sending a letter to the other colonial assemblies, asking them to join the resistance movement. Upon receipt of the Massachusetts Circular Letter, other colonies also sent petitions to the king. Virginia and Pennsylvania also sent petitions to Parliament, but the other colonies did not, believing that petitioning Parliament might be interpreted as an admission of its sovereignty over them. Parliament refused to consider the petitions of Virginia and Pennsylvania.
In Great Britain, Lord Hillsborough, who had recently been appointed to the newly created office of Colonial Secretary, was alarmed by the actions of the Massachusetts House. In April 1768 he sent a letter to the colonial governors in America, instructing them to dissolve the colonial assemblies if they responded to the Massachusetts Circular Letter. He also sent a letter to Massachusetts Governor Francis Bernard, instructing him to have the Massachusetts House rescind the Circular Letter. By a vote of 92 to 17, the House refused to comply, and Bernard promptly dissolved the legislature.
Merchants in the colonies, some of them smugglers, organized economic boycotts to put pressure on their British counterparts to work for repeal of the Townshend Acts. Boston merchants organized the first non-importation agreement, which called for merchants to suspend importation of certain British goods effective 1 January 1769. Merchants in other colonial ports, including New York City and Philadelphia, eventually joined the boycott. In Virginia, the non-importation effort was organized by George Washington and George Mason. When the Virginia House of Burgesses passed a resolution stating that Parliament had no right to tax Virginians without their consent, Governor Lord Botetourt dissolved the assembly. The members met at Raleigh Tavern and adopted a boycott agreement known as the "Association".
The non-importation movement was not as effective as promoters had hoped. British exports to the colonies declined by 38 percent in 1769, but there were many merchants who did not participate in the boycott. The boycott movement began to fail by 1770, and came to an end in 1771.
Unrest in Boston
The newly created American Customs Board was seated in Boston, and so it was there that the Board concentrated on strictly enforcing the Townshend Acts. The acts were so unpopular in Boston that the Customs Board requested naval and military assistance. Commodore Samuel Hood complied by sending the fifty-gun warship HMS Romney, which arrived in Boston Harbor in May 1768.
On 10 June 1768, customs officials seized the Liberty, a sloop owned by leading Boston merchant John Hancock, on allegations that the ship had been involved in smuggling. Bostonians, already angry because the captain of the Romney had been impressing local sailors, began to riot. Customs officials fled to Castle William for protection. With John Adams serving as his lawyer, Hancock was prosecuted in a highly publicized trial by a vice-admiralty court, but the charges were eventually dropped.
Given the unstable state of affairs in Massachusetts, Hillsborough instructed Governor Bernard to try to find evidence of treason in Boston. Parliament had determined that the Treason Act 1543 was still in force, which would allow Bostonians to be transported to England to stand trial for treason. Bernard could find no one who was willing to provide reliable evidence, however, and so there were no treason trials. The possibility that American colonists might be arrested and sent to England for trial produced alarm and outrage in the colonies.
Even before the Liberty riot, Hillsborough had decided to send troops to Boston. On 8 June 1768, he instructed General Thomas Gage, Commander-in-Chief, North America, to send "such Force as You shall think necessary to Boston", although he conceded that this might lead to "consequences not easily foreseen". Hillsborough suggested that Gage might send one regiment to Boston, but the Liberty incident convinced officials that more than one regiment would be needed.
People in Massachusetts learned in September 1768 that troops were on the way. Samuel Adams organized an emergency, extralegal convention of towns, which passed resolutions against the imminent occupation of Boston, but on 1 October 1768, the first of four regiments of the British Army began disembarking in Boston, and the Customs Commissioners returned to town. The "Journal of Occurrences", an anonymously written series of newspaper articles, chronicled clashes between civilians and soldiers during the military occupation of Boston, apparently with some exaggeration. Tensions rose after Christopher Seider, a Boston teenager, was killed by a customs employee on 22 February 1770. Although British soldiers were not involved in that incident, resentment against the occupation escalated in the days that followed, resulting in the killing of five civilians in the so-called Boston Massacre of 5 March 1770. After the incident, the troops were withdrawn to Castle William.
On 5 March 1770, the same day as the Boston Massacre, Lord North, the new Prime Minister, presented a motion in the House of Commons that called for partial repeal of the Townshend Revenue Act. Although some in Parliament advocated a complete repeal of the act, North disagreed, arguing that the tea duty should be retained to assert "the right of taxing the Americans". After debate, the Repeal Act received the Royal Assent on 12 April 1770.
Historian Robert Chaffin argued that little had actually changed:
It would be inaccurate to claim that a major part of the Townshend Acts had been repealed. The revenue-producing tea levy, the American Board of Customs and, most important, the principle of making governors and magistrates independent all remained. In fact, the modification of the Townshend Duties Act was scarcely any change at all.
The Townshend duty on tea was retained when the 1773 Tea Act was passed, which allowed the East India Company to ship tea directly to the colonies. The Boston Tea Party soon followed, which set the stage for the American Revolution.
- Dickerson (Navigation Acts, 195–95) for example, writes that there were four Townshend Acts, and does not mention the New York Restraining Act, which Chaffin says was "officially a part of the Townshend Acts" ("Townshend Acts", 128).
- Chaffin, "Townshend Acts", 126.
- Chaffin, "Townshend Acts", 143.
- Thomas, Townshend Duties, 10.
- Knollenberg, Growth, 21–25.
- The Revenue Act of 1767 was 7 Geo. III ch. 46; Knollenberg, Growth, 47; Labaree, Tea Party, 270n12. It is also known as the Townshend Revenue Act, the Townshend Duties Act, and the Tariff Act of 1767.
- Chaffin, "Townshend Acts", 143; Thomas, Duties Crisis, 9.
- Reid, Authority to Tax, 33–39.
- Thomas, Duties Crisis, 9; Labaree, Tea Party, 19–20.
- Chaffin, "Townshend Acts", 127.
- Reid, Authority to Tax, 33.
- Thomas, Duties Crisis, 31.
- The Indemnity Act was 7 Geo. III ch. 56; Labaree, Tea Party, 269n20. It is also known as the Tea Act of 1767; Jensen, Founding, 435.
- Dickerson, Navigation Acts, 196.
- Labaree, Tea Party, 21.
- Reid, Rebellious Spirit, 29, 135n24.
- Thomas, Duties Crisis, 22–23.
- Thomas, Duties Crisis, 23–25.
- Thomas, Duties Crisis, 260.
- Miller, Origins, 255.
- Chaffin, "Townshend Acts", 128; Thomas, Duties Crisis, 30.
- Thomas, Duties Crisis, 30.
- 7 Geo. III ch. 41; Knollenberg, Growth, 47.
- Thomas, Duties Crisis, 33; Chaffin, "Townshend Acts", 129.
- Chaffin, "Townshend Acts", 130.
- Dickerson, Navigation Acts, 199.
- 8 Geo. III ch. 22.
- Thomas, Duties Crisis, 34–35.
- Chaffin, "Townshend Acts", 134.
- 7 Geo. III ch. 59. Also known as the New York Suspending Act; Knollenberg, Growth, 296.
- Chaffin, "Townshend Acts", 128.
- Chaffin, "Townshend Acts", 134–35.
- Chaffin, "Townshend Acts", 131.
- Knollenberg, Growth, 48; Thomas, Duties Crisis, 76.
- Thomas, Duties Crisis, 36.
- Chaffin, "Townshend Acts", 132.
- Knollenberg, Growth, 50.
- Knollenberg, Growth, 52–53.
- Knollenberg, Growth, 54. Dickinson's letter to Otis was dated December 5, 1767.
- Knollenberg, Growth, 54.
- Thomas, Duties Crisis, 84; Knollenberg, Growth, 54–57.
- Thomas, Duties Crisis, 85, 111–12.
- Thomas, Duties Crisis, 112.
- Thomas, Duties Crisis, 81; Knollenberg, Growth, 56.
- Knollenberg, Growth, 57–58.
- Knollenberg, Growth, 59.
- Thomas, Duties Crisis, 157.
- Chaffin, "Townshend Acts", 138.
- Knollenberg, Growth, 61–63.
- Knollenberg, Growth, 63.
- "Notorious Smuggler", 236–46; Knollenberg, Growth, 63–65.
- Thomas, Duties Crisis, 109.
- Jensen, Founding, 296–97.
- Knollenberg, Growth, 69.
- Thomas, Duties Crisis, 82; Knollenberg, Growth, 75; Jensen, Founding, 290.
- Reid, Rebellious Spirit, 125.
- Thomas, Duties Crisis, 92.
- Knollenberg, Growth, 76.
- Knollenberg, Growth, 76–77.
- Knollenberg, Growth, 77–78.
- Knollenberg, Growth, 78–79.
- Knollenberg, Growth, 81.
- Knollenberg, Growth, 71.
- 10 Geo. III c. 17; Labaree, Tea Party, 276n17.
- Knollenberg, Growth, 72.
- Chaffin, "Townshend Acts", 140.
- Chaffin, Robert J. "The Townshend Acts crisis, 1767–1770". The Blackwell Encyclopedia of the American Revolution. Jack P. Greene, and J.R. Pole, eds. Malden, Massachusetts: Blackwell, 1991; reprint 1999. ISBN 1-55786-547-7.
- Dickerson, Oliver M. The Navigation Acts and the American Revolution. Philadelphia: University of Pennsylvania Press, 1951.
- Knollenberg, Bernhard. Growth of the American Revolution, 1766–1775. New York: Free Press, 1975. ISBN 0-02-917110-5.
- Labaree, Benjamin Woods. The Boston Tea Party. Originally published 1964. Boston: Northeastern University Press, 1979. ISBN 0-930350-05-7.
- Jensen, Merrill. The Founding of a Nation: A History of the American Revolution, 1763–1776. New York: Oxford University Press, 1968.
- Miller, John C. Origins of the American Revolution. Stanford University Press, 1959.
- Reid, John Phillip. In a Rebellious Spirit: The Argument of Facts, the Liberty Riot, and the Coming of the American Revolution. University Park: Pennsylvania State University Press, 1979. ISBN 0-271-00202-6.
- Reid, John Phillip. Constitutional History of the American Revolution, II: The Authority to Tax. Madison: University of Wisconsin Press, 1987. ISBN 0-299-11290-X.
- Thomas, Peter D. G. The Townshend Duties Crisis: The Second Phase of the American Revolution, 1767–1773. Oxford: Oxford University Press, 1987. ISBN 0-19-822967-4.
- Barrow, Thomas C. Trade and Empire: The British Customs Service in Colonial America, 1660–1775. Harvard University Press, 1967.
- Breen, T. H. The Marketplace of Revolution: How Consumer Politics Shaped American Independence. Oxford University Press, 2005. ISBN 0-19-518131-X; ISBN 978-0-19-518131-9.
- Knight, Carol Lynn H. The American Colonial Press and the Townshend Crisis, 1766–1770: A Study in Political Imagery. Lewiston: E. Mellen Press, 1990.
- Ubbelohde, Carl. The Vice-Admiralty Courts and the American Revolution. Chapel Hill: University of North Carolina Press, 1960.
- Text of the Townshend Revenue Act
- Article on the Townshend Acts, with some period documents, from the Massachusetts Historical Society
- Documents on the Townshend Acts and Period 1767–1768 | http://en.wikipedia.org/wiki/Townshend_Acts | 13 |