Sentence | video_title |
---|---|
And then I'm going to have, I'll look at that data point and that expected, and I would get 15 minus 24 squared over expected, over 24. I'm running out of colors. And then we would look at that, those two numbers, and we would say plus 25 minus 16 squared divided by expected. And then we would get, we would look at these two, plus 15 minus 12 squared over expected, over 12. And then last but not least, let me find a color I haven't used, we would look at that and that, and we would say plus five minus eight squared over expected, over eight. Now once you get that value for the chi-square statistic, the next question is what are the degrees of freedom? Now a simple rule of thumb is to just look at your data and think about the number of rows and the number of columns. | Introduction to the chi-square test for homogeneity AP Statistics Khan Academy.mp3 |
And then we would get, we would look at these two, plus 15 minus 12 squared over expected, over 12. And then last but not least, let me find a color I haven't used, we would look at that and that, and we would say plus five minus eight squared over expected, over eight. Now once you get that value for the chi-square statistic, the next question is what are the degrees of freedom? Now a simple rule of thumb is to just look at your data and think about the number of rows and the number of columns. And we have three rows and two columns. And so your degrees of freedom are going to be the number of rows minus one, three minus one, times the number of columns minus one, two minus one. And so this is going to be equal to two times one, which is equal to two. | Introduction to the chi-square test for homogeneity AP Statistics Khan Academy.mp3 |
Now a simple rule of thumb is to just look at your data and think about the number of rows and the number of columns. And we have three rows and two columns. And so your degrees of freedom are going to be the number of rows minus one, three minus one, times the number of columns minus one, two minus one. And so this is going to be equal to two times one, which is equal to two. Now the reason why that makes intuitive sense is think about it. If you knew two of these data points, and if you knew all of the totals, then you could figure out the other data points. If you knew these two data points, you could figure out that. | Introduction to the chi-square test for homogeneity AP Statistics Khan Academy.mp3 |
And so this is going to be equal to two times one, which is equal to two. Now the reason why that makes intuitive sense is think about it. If you knew two of these data points, and if you knew all of the totals, then you could figure out the other data points. If you knew these two data points, you could figure out that. If you knew this data point, you knew the total. You could figure out that. If you knew this data point and you knew the total, you could figure out that. | Introduction to the chi-square test for homogeneity AP Statistics Khan Academy.mp3 |
If you knew these two data points, you could figure out that. If you knew this data point, you knew the total. You could figure out that. If you knew this data point and you knew the total, you could figure out that. And if you figured out that and that, then you could figure out this right over here. And so that's why this rule of thumb works. The number of rows minus one times the number of columns minus one gives you your degrees of freedom. | Introduction to the chi-square test for homogeneity AP Statistics Khan Academy.mp3 |
If you knew this data point and you knew the total, you could figure out that. And if you figured out that and that, then you could figure out this right over here. And so that's why this rule of thumb works. The number of rows minus one times the number of columns minus one gives you your degrees of freedom. Now, given this chi-squared statistic that I haven't calculated, but you could type this into a calculator and figure it out, and this degrees of freedom, we could then figure out the p-value. We could figure out the probability of getting a chi-squared statistic this extreme or more extreme. And if this is less than our significance level, which we should have set ahead of time, then we would reject the null hypothesis and it would suggest the alternative. | Introduction to the chi-square test for homogeneity AP Statistics Khan Academy.mp3 |
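The chi-square arithmetic narrated above can be sketched in a few lines of Python. This is a hedged illustration: only the four observed/expected pairs actually read out in this excerpt are included (the full 3-by-2 table has six cells), and the closed-form p-value relies on the fact that a chi-square distribution with exactly 2 degrees of freedom has survival function e^(-x/2).

```python
import math

# Observed/expected pairs read out in the transcript excerpt
# (the full 3x2 table has six cells; only these four are spoken here).
pairs = [(15, 24), (25, 16), (15, 12), (5, 8)]

# Chi-square statistic: sum of (observed - expected)^2 / expected.
chi2 = sum((obs - exp) ** 2 / exp for obs, exp in pairs)

# Degrees of freedom for an r-by-c table: (rows - 1) * (columns - 1).
rows, cols = 3, 2
df = (rows - 1) * (cols - 1)

# Special case: for df = 2 the chi-square survival function is
# P(X >= x) = exp(-x / 2), so no stats library is needed here.
p_value = math.exp(-chi2 / 2)

print(chi2, df, p_value)
```

If this p-value is below the significance level chosen ahead of time, we reject the null hypothesis, exactly as the transcript describes.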
For warmup, Jeremiah likes to shoot three-point shots until he successfully makes one. Alright, this is the telltale signs of a geometric random variable. How many trials do I have to take until I get a success? Let m be the number of shots it takes Jeremiah to successfully make his first three-point shot. Okay, so they're defining the random variable here, the number of shots it takes, the number of trials it takes until we get a successful three-point shot. Assume that the results of each shot are independent. Alright, the probability that he makes a given shot is not dependent on whether he made or missed the previous shots. | Probability for a geometric random variable Random variables AP Statistics Khan Academy.mp3 |
Let m be the number of shots it takes Jeremiah to successfully make his first three-point shot. Okay, so they're defining the random variable here, the number of shots it takes, the number of trials it takes until we get a successful three-point shot. Assume that the results of each shot are independent. Alright, the probability that he makes a given shot is not dependent on whether he made or missed the previous shots. Find the probability that Jeremiah's first successful shot occurs on his third attempt. So like always, pause this video and see if you can have a go at it. Alright, now let's work through this together. | Probability for a geometric random variable Random variables AP Statistics Khan Academy.mp3 |
Alright, the probability that he makes a given shot is not dependent on whether he made or missed the previous shots. Find the probability that Jeremiah's first successful shot occurs on his third attempt. So like always, pause this video and see if you can have a go at it. Alright, now let's work through this together. So we wanna find the probability that, so m is the number of shots it takes until Jeremiah makes his first successful one. And so what they're really asking is find the probability that m is equal to three, that his first successful shot occurs on his third attempt. So m is equal to three. | Probability for a geometric random variable Random variables AP Statistics Khan Academy.mp3 |
Alright, now let's work through this together. So we wanna find the probability that, so m is the number of shots it takes until Jeremiah makes his first successful one. And so what they're really asking is find the probability that m is equal to three, that his first successful shot occurs on his third attempt. So m is equal to three. So that the number of shots it takes Jeremiah to make his first successful shot is three. So how do we do this? Well, what's just the probability of that happening? | Probability for a geometric random variable Random variables AP Statistics Khan Academy.mp3 |
So m is equal to three. So that the number of shots it takes Jeremiah to make his first successful shot is three. So how do we do this? Well, what's just the probability of that happening? Well, that means he has to miss his first two shots and then make his third shot. So what's the probability of him missing his first shot? Well, if he has a one-fourth chance of making his shots, he has a three-fourths probability of missing his shots. | Probability for a geometric random variable Random variables AP Statistics Khan Academy.mp3 |
Well, what's just the probability of that happening? Well, that means he has to miss his first two shots and then make his third shot. So what's the probability of him missing his first shot? Well, if he has a one-fourth chance of making his shots, he has a three-fourths probability of missing his shots. So this will be three-fourths, so he misses the first shot, times he has to miss the second shot, and then he has to make his third shot. So there you have it, that's the probability. Miss, miss, make. | Probability for a geometric random variable Random variables AP Statistics Khan Academy.mp3 |
Well, if he has a one-fourth chance of making his shots, he has a three-fourths probability of missing his shots. So this will be three-fourths, so he misses the first shot, times he has to miss the second shot, and then he has to make his third shot. So there you have it, that's the probability. Miss, miss, make. And so what is this going to be? This is equal to nine over 64. So there you have it. | Probability for a geometric random variable Random variables AP Statistics Khan Academy.mp3 |
Miss, miss, make. And so what is this going to be? This is equal to nine over 64. So there you have it. If you wanted to have this as a decimal, we could get a calculator out real fast. So this is nine, whoops, nine divided by 64 is equal to roughly 0.14, approximately 0.14. Or another way to think about it is, a roughly 14% chance or 14% probability that his first successful shot occurs on his third attempt. | Probability for a geometric random variable Random variables AP Statistics Khan Academy.mp3 |
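The miss-miss-make reasoning above generalizes to the geometric probability mass function. A minimal sketch in Python, with p = 1/4 as in the video (the function name is just for illustration):

```python
# Geometric setting: each shot succeeds with probability p = 1/4,
# shots are independent, M = number of shots until the first make.
p = 1 / 4

# P(M = 3) = miss * miss * make = (3/4) * (3/4) * (1/4).
p_m_equals_3 = (1 - p) ** 2 * p

# General form for any attempt k: P(M = k) = (1 - p)^(k - 1) * p.
def geometric_pmf(k, p):
    return (1 - p) ** (k - 1) * p

print(p_m_equals_3)  # 9/64 = 0.140625, roughly a 14% chance
```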
This is just going to be a ton of algebraic manipulation, but I'll try to color-code it well so we don't get lost in the math. Let me just rewrite this expression over here. This whole video is just going to be rewriting this over and over again, just simplifying it a bit with algebra. This first term right over here, y1 minus mx1 plus b squared, that's going to be, and we could write this as all going to be the squared error of the line. This first term over here, I'll keep it in blue, is going to be, if we just expand it, y1 squared minus 2 times y1 times mx1 plus b plus mx1 plus b squared. All I did is I just squared this binomial right here. You can imagine this, if this was a minus b, it would be a squared minus 2ab plus b squared. | Proof (part 1) minimizing squared error to regression line Khan Academy.mp3 |
This first term right over here, y1 minus mx1 plus b squared, that's going to be, and we could write this as all going to be the squared error of the line. This first term over here, I'll keep it in blue, is going to be, if we just expand it, y1 squared minus 2 times y1 times mx1 plus b plus mx1 plus b squared. All I did is I just squared this binomial right here. You can imagine this, if this was a minus b, it would be a squared minus 2ab plus b squared. That's all I did. Now I'll just have to do that for each of the terms. Each term is only different by the x and the y coordinates right over here. | Proof (part 1) minimizing squared error to regression line Khan Academy.mp3 |
You can imagine this, if this was a minus b, it would be a squared minus 2ab plus b squared. That's all I did. Now I'll just have to do that for each of the terms. Each term is only different by the x and the y coordinates right over here. The next term, and I'll write it, I'll go down so that we can combine like terms. This term over here squared is going to be y2 squared minus 2 times y2 times mx2 plus b plus mx2 plus b squared. Same exact thing up here, except now it was with x2 and y2 as opposed to x1 and y1. | Proof (part 1) minimizing squared error to regression line Khan Academy.mp3 |
Each term is only different by the x and the y coordinates right over here. The next term, and I'll write it, I'll go down so that we can combine like terms. This term over here squared is going to be y2 squared minus 2 times y2 times mx2 plus b plus mx2 plus b squared. Same exact thing up here, except now it was with x2 and y2 as opposed to x1 and y1. Then we're just going to keep doing that n times. We're just going to keep doing it n times. We're going to do it for the third, x3, y3, keep going, keep going, all the way until we get to this nth term over here. | Proof (part 1) minimizing squared error to regression line Khan Academy.mp3 |
Same exact thing up here, except now it was with x2 and y2 as opposed to x1 and y1. Then we're just going to keep doing that n times. We're just going to keep doing it n times. We're going to do it for the third, x3, y3, keep going, keep going, all the way until we get to this nth term over here. This nth term over here when we square it is going to be yn squared minus 2yn times mxn plus b plus mxn plus b squared. Now the next thing I want to do is actually expand these out a little bit more. Let's expand these out a little more. | Proof (part 1) minimizing squared error to regression line Khan Academy.mp3 |
We're going to do it for the third, x3, y3, keep going, keep going, all the way until we get to this nth term over here. This nth term over here when we square it is going to be yn squared minus 2yn times mxn plus b plus mxn plus b squared. Now the next thing I want to do is actually expand these out a little bit more. Let's expand these out a little more. Let's actually scroll down. This whole expression, I'm just going to rewrite it, is the same thing as, and remember this is just a squared error of the line, so let me rewrite this top line over here. This top line over here is y1 squared. | Proof (part 1) minimizing squared error to regression line Khan Academy.mp3 |
Let's expand these out a little more. Let's actually scroll down. This whole expression, I'm just going to rewrite it, is the same thing as, and remember this is just a squared error of the line, so let me rewrite this top line over here. This top line over here is y1 squared. Then I'm going to distribute this 2y1. This is going to be minus 2y1mx1, that's just that times that, minus 2y1b and then plus, and now let's expand mx1 plus b squared. That's going to be m squared x1 squared plus 2 times mx1 times b plus b squared. | Proof (part 1) minimizing squared error to regression line Khan Academy.mp3 |
This top line over here is y1 squared. Then I'm going to distribute this 2y1. This is going to be minus 2y1mx1, that's just that times that, minus 2y1b and then plus, and now let's expand mx1 plus b squared. That's going to be m squared x1 squared plus 2 times mx1 times b plus b squared. All I did, if this was a plus b squared, this is a squared plus 2ab plus b squared. We're just going to do that for each of these terms, or each of these colors I guess you could say. Now let's move to the second term. | Proof (part 1) minimizing squared error to regression line Khan Academy.mp3 |
That's going to be m squared x1 squared plus 2 times mx1 times b plus b squared. All I did, if this was a plus b squared, this is a squared plus 2ab plus b squared. We're just going to do that for each of these terms, or each of these colors I guess you could say. Now let's move to the second term. Plus, it's going to be the same thing, but instead of y1s and x1s, it's going to be y2s and x2s. So it is y2 squared minus 2y2mx2 minus 2y2b plus m squared x2 squared plus 2 times mx2b plus b squared. We're going to keep doing this all the way until we get the nth term. | Proof (part 1) minimizing squared error to regression line Khan Academy.mp3 |
Now let's move to the second term. Plus, it's going to be the same thing, but instead of y1s and x1s, it's going to be y2s and x2s. So it is y2 squared minus 2y2mx2 minus 2y2b plus m squared x2 squared plus 2 times mx2b plus b squared. We're going to keep doing this all the way until we get the nth term. All the way until we get to the nth color we should say. So this is going to be yn squared minus 2ynmxn. You don't even have to think, you just have to substitute these with n's now. | Proof (part 1) minimizing squared error to regression line Khan Academy.mp3 |
We're going to keep doing this all the way until we get the nth term. All the way until we get to the nth color we should say. So this is going to be yn squared minus 2ynmxn. You don't even have to think, you just have to substitute these with n's now. We could actually look at this, but it's going to be the exact same thing. mxn minus 2ynb plus m squared xn squared plus 2mxnb plus b squared. So once again, this is just the squared error of that line with n points, between those n points and the line y equals mx plus b. | Proof (part 1) minimizing squared error to regression line Khan Academy.mp3 |
You don't even have to think, you just have to substitute these with n's now. We could actually look at this, but it's going to be the exact same thing. mxn minus 2ynb plus m squared xn squared plus 2mxnb plus b squared. So once again, this is just the squared error of that line with n points, between those n points and the line y equals mx plus b. So let's see if we can simplify this somehow. And to do that, what I'm going to do is I'm going to kind of try to add up a bunch of these terms here. So if I were to add up all of these terms right here, if I were to add up this column right over there, what do I get? | Proof (part 1) minimizing squared error to regression line Khan Academy.mp3 |
So once again, this is just the squared error of that line with n points, between those n points and the line y equals mx plus b. So let's see if we can simplify this somehow. And to do that, what I'm going to do is I'm going to kind of try to add up a bunch of these terms here. So if I were to add up all of these terms right here, if I were to add up this column right over there, what do I get? Well it's going to be y1 squared plus y2 squared plus y all the way to yn squared. That's those terms right over there. So I'm going to have that. | Proof (part 1) minimizing squared error to regression line Khan Academy.mp3 |
So if I were to add up all of these terms right here, if I were to add up this column right over there, what do I get? Well it's going to be y1 squared plus y2 squared plus y all the way to yn squared. That's those terms right over there. So I'm going to have that. And then I'm going to have minus, you have this common 2m amongst all of these terms over here. So let me write that down. 2m here, 2m here, 2m here. | Proof (part 1) minimizing squared error to regression line Khan Academy.mp3 |
So I'm going to have that. And then I'm going to have minus, you have this common 2m amongst all of these terms over here. So let me write that down. 2m here, 2m here, 2m here. So then you're going to have, let me put parentheses around here. So you have these terms all added up. Then you have minus 2m times all of these terms. | Proof (part 1) minimizing squared error to regression line Khan Academy.mp3 |
2m here, 2m here, 2m here. So then you're going to have, let me put parentheses around here. So you have these terms all added up. Then you have minus 2m times all of these terms. So you have, actually let me color code it just so you see what we're doing. I want to be very careful with this math so that nothing seems too confusing. Although this is really just algebraic manipulation. | Proof (part 1) minimizing squared error to regression line Khan Academy.mp3 |
Then you have minus 2m times all of these terms. So you have, actually let me color code it just so you see what we're doing. I want to be very careful with this math so that nothing seems too confusing. Although this is really just algebraic manipulation. So if I add all of these up, I get y1 squared plus y2 squared all the way to yn squared. I'll put some parentheses around that. And then to that, we have these common terms. | Proof (part 1) minimizing squared error to regression line Khan Academy.mp3 |
Although this is really just algebraic manipulation. So if I add all of these up, I get y1 squared plus y2 squared all the way to yn squared. I'll put some parentheses around that. And then to that, we have these common terms. We have this minus 2m, minus 2m, minus 2m. So we can distribute those out. And so this actually, I should actually write it like this. | Proof (part 1) minimizing squared error to regression line Khan Academy.mp3 |
And then to that, we have these common terms. We have this minus 2m, minus 2m, minus 2m. So we can distribute those out. And so this actually, I should actually write it like this. So we have a minus 2m times, once we distribute it out, here we're just going to be left with a y1 x1. Maybe I could call it an x1 y1. x1 y1, that's that over there with the 2m factored out. | Proof (part 1) minimizing squared error to regression line Khan Academy.mp3 |
And so this actually, I should actually write it like this. So we have a minus 2m times, once we distribute it out, here we're just going to be left with a y1 x1. Maybe I could call it an x1 y1. x1 y1, that's that over there with the 2m factored out. Plus x2, let me do that in another color. I want to make this easy to read. Plus x2 y2 plus xn yn. | Proof (part 1) minimizing squared error to regression line Khan Academy.mp3 |
x1 y1, that's that over there with the 2m factored out. Plus x2, let me do that in another color. I want to make this easy to read. Plus x2 y2 plus xn yn. Plus x, and well we're going to keep adding up. We're going to do this n times, all the way to plus xn yn. This last term over here, yn xn, same thing. | Proof (part 1) minimizing squared error to regression line Khan Academy.mp3 |
Plus x2 y2 plus xn yn. Plus x, and well we're going to keep adding up. We're going to do this n times, all the way to plus xn yn. This last term over here, yn xn, same thing. So that's the sum. So this stuff over here, let me just add a new color. The sum of all of this stuff right over here is the same thing as this term right over here. | Proof (part 1) minimizing squared error to regression line Khan Academy.mp3 |
This last term over here, yn xn, same thing. So that's the sum. So this stuff over here, let me just add a new color. The sum of all of this stuff right over here is the same thing as this term right over here. And then we have to sum this right over here. And you see again, we can factor out. We can factor out here a minus 2b out of all of these terms. | Proof (part 1) minimizing squared error to regression line Khan Academy.mp3 |
The sum of all of this stuff right over here is the same thing as this term right over here. And then we have to sum this right over here. And you see again, we can factor out. We can factor out here a minus 2b out of all of these terms. So we have minus 2b times y1 plus y2 plus all the way to yn. So this business, so these terms right over here, these terms right over here when you add them up, give you these terms or this term right over there. And let's just keep going. | Proof (part 1) minimizing squared error to regression line Khan Academy.mp3 |
We can factor out here a minus 2b out of all of these terms. So we have minus 2b times y1 plus y2 plus all the way to yn. So this business, so these terms right over here, these terms right over here when you add them up, give you these terms or this term right over there. And let's just keep going. And then in the next video, we're probably going to run out of time in this one. In the next video I'll simplify this more and I'll actually clean up the algebra a good bit. So then the next term, what is this going to be? | Proof (part 1) minimizing squared error to regression line Khan Academy.mp3 |
And let's just keep going. And then in the next video, we're probably going to run out of time in this one. In the next video I'll simplify this more and I'll actually clean up the algebra a good bit. So then the next term, what is this going to be? Same drill. We can factor out an m squared. So we have m squared times x1 squared plus x2 squared plus all the way. | Proof (part 1) minimizing squared error to regression line Khan Academy.mp3 |
So then the next term, what is this going to be? Same drill. We can factor out an m squared. So we have m squared times x1 squared plus x2 squared plus all the way. Actually, I want to color code them. I forgot to color code these over here. Plus x2 squared plus all the way to xn squared. | Proof (part 1) minimizing squared error to regression line Khan Academy.mp3 |
So we have m squared times x1 squared plus x2 squared plus all the way. Actually, I want to color code them. I forgot to color code these over here. Plus x2 squared plus all the way to xn squared. Let me color code these. This was a yn squared and this over here was a y2 squared. So this is exactly this. | Proof (part 1) minimizing squared error to regression line Khan Academy.mp3 |
Plus x2 squared plus all the way to xn squared. Let me color code these. This was a yn squared and this over here was a y2 squared. So this is exactly this. So we've written, so in this last step we just did, this thing over here is this thing right over here. And of course we have to add it, so I'll put a plus out front. We're almost done with this stage of the simplification. | Proof (part 1) minimizing squared error to regression line Khan Academy.mp3 |
So this is exactly this. So we've written, so in this last step we just did, this thing over here is this thing right over here. And of course we have to add it, so I'll put a plus out front. We're almost done with this stage of the simplification. So over here we have a common 2mb. So let's put a plus 2mb times, once again, x1 plus x2 plus all the way to xn. So this term right over here is the exact same thing as this term over here. | Proof (part 1) minimizing squared error to regression line Khan Academy.mp3 |
We're almost done with this stage of the simplification. So over here we have a common 2mb. So let's put a plus 2mb times, once again, x1 plus x2 plus all the way to xn. So this term right over here is the exact same thing as this term over here. And then finally we have a b squared in each of these. And how many of these b squares do we have? Well, we have n of these lines, right? | Proof (part 1) minimizing squared error to regression line Khan Academy.mp3 |
So this term right over here is the exact same thing as this term over here. And then finally we have a b squared in each of these. And how many of these b squares do we have? Well, we have n of these lines, right? This is the first line, second line, then a bunch, bunch, bunch, all the way to the nth line. So we have b squared added to itself n times. So this right over here is just b squared n times. | Proof (part 1) minimizing squared error to regression line Khan Academy.mp3 |
Well, we have n of these lines, right? This is the first line, second line, then a bunch, bunch, bunch, all the way to the nth line. So we have b squared added to itself n times. So this right over here is just b squared n times. So we'll just write that as plus n times b squared. Now it doesn't look like, let me remind ourselves what this is all about. This is all just algebraic manipulation of the squared error between those n points and the line y equals mx plus b. | Proof (part 1) minimizing squared error to regression line Khan Academy.mp3 |
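The term-by-term expansion carried out above can be checked numerically. This sketch uses made-up sample points and an arbitrary m and b (none of these numbers come from the video) and confirms that the direct squared error equals the fully expanded form derived in the transcript:

```python
# Arbitrary illustration data; any points and any (m, b) should work.
xs = [1.0, 2.0, 3.0, 5.0]
ys = [2.0, 2.5, 4.0, 6.5]
m, b = 0.9, 0.8
n = len(xs)

# Direct form: sum of (y_i - (m*x_i + b))^2 over the n points.
sse_direct = sum((y - (m * x + b)) ** 2 for x, y in zip(xs, ys))

# Expanded form from the video:
# sum(y^2) - 2m*sum(xy) - 2b*sum(y) + m^2*sum(x^2) + 2mb*sum(x) + n*b^2.
sse_expanded = (
    sum(y ** 2 for y in ys)
    - 2 * m * sum(x * y for x, y in zip(xs, ys))
    - 2 * b * sum(ys)
    + m ** 2 * sum(x ** 2 for x in xs)
    + 2 * m * b * sum(xs)
    + n * b ** 2
)

# The two should agree up to floating-point rounding.
assert abs(sse_direct - sse_expanded) < 1e-9
```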
In the last video, we were able to find the equation for the regression line for these four data points. What I want to do in this video is figure out the r squared for these data points. Figure out how good this line fits the data, or even better, figure out the percentage, which is really the same thing, of the variation of these data points, especially the variation in y, that can be explained by a variation in x. So to do that, I'm actually going to get a spreadsheet out. I actually have tried to do this with a calculator, and it's much harder. So hopefully this doesn't confuse you too much to use a spreadsheet. And I'm going to make a couple of columns here. | Calculating R-squared Regression Probability and Statistics Khan Academy.mp3 |
So to do that, I'm actually going to get a spreadsheet out. I actually have tried to do this with a calculator, and it's much harder. So hopefully this doesn't confuse you too much to use a spreadsheet. And I'm going to make a couple of columns here. And spreadsheets actually have functions that will do all of this automatically, but I really want to do it so that you could do it by hand if you had to. So I'm going to make a couple of columns here. This is going to be my x column. | Calculating R-squared Regression Probability and Statistics Khan Academy.mp3 |
And I'm going to make a couple of columns here. And spreadsheets actually have functions that will do all of this automatically, but I really want to do it so that you could do it by hand if you had to. So I'm going to make a couple of columns here. This is going to be my x column. This is going to be my y column. This is going to be the column, I'll call this y star. This will be the y value that our line predicts based on our x value. | Calculating R-squared Regression Probability and Statistics Khan Academy.mp3 |
This is going to be my x column. This is going to be my y column. This is going to be the column, I'll call this y star. This will be the y value that our line predicts based on our x value. This is going to be the error with the line. So it's going to be the difference, and we call it the squared error with line. Actually, let me just do the error with line. | Calculating R-squared Regression Probability and Statistics Khan Academy.mp3 |
This will be the y value that our line predicts based on our x value. This is going to be the error with the line. So it's going to be the difference, and we call it the squared error with line. Actually, let me just do the error with line. I'll do the squared error. I don't want this to take up too much space. Squared error with line. | Calculating R-squared Regression Probability and Statistics Khan Academy.mp3 |
Actually, let me just do the error with line. I'll do the squared error. I don't want this to take up too much space. Squared error with line. And then the next one I want to do the squared error. Actually, no, I already had the squared error. And then the next one I am going to have the squared variation for that y value, squared from the mean y. I think these columns by themselves will be enough for us to do everything. | Calculating R-squared Regression Probability and Statistics Khan Academy.mp3 |
Squared error with line. And then the next one I want to do the squared error. Actually, no, I already had the squared error. And then the next one I am going to have the squared variation for that y value, squared from the mean y. I think these columns by themselves will be enough for us to do everything. So let's first put all the data points in. So we had negative 2 comma negative 3. That was one data point. | Calculating R-squared Regression Probability and Statistics Khan Academy.mp3 |
And then the next one I am going to have the squared variation for that y value, squared from the mean y. I think these columns by themselves will be enough for us to do everything. So let's first put all the data points in. So we had negative 2 comma negative 3. That was one data point. Negative 1 comma negative 1. Then we had 1 comma 2. Then we have 4 comma 3. | Calculating R-squared Regression Probability and Statistics Khan Academy.mp3 |
That was one data point. Negative 1 comma negative 1. Then we had 1 comma 2. Then we have 4 comma 3. Now, what does our line predict? Well, our line says, look, you give me an x value, and I'm going to tell you what y value I'll predict. So when x is equal to negative 2, the y value on the line is going to be the slope. | Calculating R-squared Regression Probability and Statistics Khan Academy.mp3 |
Then we have 4 comma 3. Now, what does our line predict? Well, our line says, look, you give me an x value, and I'm going to tell you what y value I'll predict. So when x is equal to negative 2, the y value on the line is going to be the slope. So this is going to be equal to 41 divided by 42 times our x value, and I just selected that cell. And just a little bit of a primer on spreadsheets, I'm selecting the cell D2. I was able to just move my cursor over and select that. | Calculating R-squared Regression Probability and Statistics Khan Academy.mp3 |
So when x is equal to negative 2, the y value on the line is going to be the slope. So this is going to be equal to 41 divided by 42 times our x value, and I just selected that cell. And just a little bit of a primer on spreadsheets, I'm selecting the cell D2. I was able to just move my cursor over and select that. That tells me the x value minus 5 over 21. Minus 5 divided by 21. Just like that. | Calculating R-squared Regression Probability and Statistics Khan Academy.mp3 |
I was able to just move my cursor over and select that. That tells me the x value minus 5 over 21. Minus 5 divided by 21. Just like that. So just to be clear of what we're even doing, this y star here, I got negative 2.19. That tells us that this point right over here is negative 2.19 right over here. So when we figure out the error, we're going to figure out the distance between negative 3, that's our y value, between negative 3 and negative 2.19. | Calculating R-squared Regression Probability and Statistics Khan Academy.mp3 |
Just like that. So just to be clear of what we're even doing, this y star here, I got negative 2.19. That tells us that this point right over here is negative 2.19 right over here. So when we figure out the error, we're going to figure out the distance between negative 3, that's our y value, between negative 3 and negative 2.19. So let's do that. So the error is just going to be equal to our y value, that cell E2, minus the value that our line would predict. And we want the square. | Calculating R-squared Regression Probability and Statistics Khan Academy.mp3 |
So when we figure out the error, we're going to figure out the distance between negative 3, that's our y value, between negative 3 and negative 2.19. So let's do that. So the error is just going to be equal to our y value, that cell E2, minus the value that our line would predict. And we want the square. So just that value is the actual error, but we want to square it. So we want to square it just like that. So we will square it. | Calculating R-squared Regression Probability and Statistics Khan Academy.mp3 |
And we want the square. So just that value is the actual error, but we want to square it. So we want to square it just like that. So we will square it. And then, let me make sure I did the right thing. Yep. And then the next thing we want to do is the squared distance, so this is equal to the squared distance of our y value from the y's mean. | Calculating R-squared Regression Probability and Statistics Khan Academy.mp3 |
So we will square it. And then, let me make sure I did the right thing. Yep. And then the next thing we want to do is the squared distance, so this is equal to the squared distance of our y value from the y's mean. So what's the mean of the y's? Mean of the y's is 1 4th, so minus 0.25. It's the same thing as 1 4th. | Calculating R-squared Regression Probability and Statistics Khan Academy.mp3 |
And then the next thing we want to do is the squared distance, so this is equal to the squared distance of our y value from the y's mean. So what's the mean of the y's? Mean of the y's is 1 4th, so minus 0.25. It's the same thing as 1 4th. And we also want to square that. Now, this is what's fun about spreadsheets. I can apply those formulas to every row now. | Calculating R-squared Regression Probability and Statistics Khan Academy.mp3 |
It's the same thing as 1 4th. And we also want to square that. Now, this is what's fun about spreadsheets. I can apply those formulas to every row now. And notice what it did when I did that. Now all of a sudden, this is the y value that my line would predict, it's now using this x value and sticking it over here. It's now figuring out the squared distance from the line using what the line would predict and using the y value, this one. | Calculating R-squared Regression Probability and Statistics Khan Academy.mp3 |
I can apply those formulas to every row now. And notice what it did when I did that. Now all of a sudden, this is the y value that my line would predict, it's now using this x value and sticking it over here. It's now figuring out the squared distance from the line using what the line would predict and using the y value, this one. And then it does the same thing over here. It figures out the squared distance of this y value from the mean. So what is the total squared error with the line? | Calculating R-squared Regression Probability and Statistics Khan Academy.mp3 |
It's now figuring out the squared distance from the line using what the line would predict and using the y value, this one. And then it does the same thing over here. It figures out the squared distance of this y value from the mean. So what is the total squared error with the line? So let me just sum this up. The total squared error with the line is 2.73. And then the total variation from the mean, the squared distances from the mean of the y are 22.75. | Calculating R-squared Regression Probability and Statistics Khan Academy.mp3 |
So what is the total squared error with the line? So let me just sum this up. The total squared error with the line is 2.73. And then the total variation from the mean, the squared distances from the mean of the y are 22.75. So let me be very clear what this is. So let me write these numbers down. So our squared, I'll write it up here so we can keep looking at this actual graph. | Calculating R-squared Regression Probability and Statistics Khan Academy.mp3 |
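The two spreadsheet columns and their totals can be reproduced with a short Python sketch; the data points and the regression line y = (41/42)x - 5/21 come from the video, while the variable names are my own:

```python
# Data points from the video
points = [(-2, -3), (-1, -1), (1, 2), (4, 3)]

def predict(x):
    # The regression line found earlier: y* = (41/42) x - 5/21
    return (41 / 42) * x - 5 / 21

mean_y = sum(y for _, y in points) / len(points)  # 1/4 = 0.25

# Column 1: squared error with the line, (y - y*)^2 for each point
sq_error_line = [(y - predict(x)) ** 2 for x, y in points]

# Column 2: squared variation from the mean, (y - mean_y)^2 for each point
sq_var_mean = [(y - mean_y) ** 2 for _, y in points]

print(round(sum(sq_error_line), 2))  # total squared error with the line
print(round(sum(sq_var_mean), 2))    # total squared variation from the mean
```

Summing the two columns gives roughly 2.74 and exactly 22.75, matching the totals read off the spreadsheet.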
And then the total variation from the mean, the squared distances from the mean of the y are 22.75. So let me be very clear what this is. So let me write these numbers down. So our squared, I'll write it up here so we can keep looking at this actual graph. So our squared error versus our line, our total squared error, we just computed to be 2.74. I rounded it a little bit. And what that is, is you take each of these data points, vertical distance to the line. | Calculating R-squared Regression Probability and Statistics Khan Academy.mp3 |
So our squared, I'll write it up here so we can keep looking at this actual graph. So our squared error versus our line, our total squared error, we just computed to be 2.74. I rounded it a little bit. And what that is, is you take each of these data points, vertical distance to the line. So this distance squared plus this distance squared plus this distance squared plus this distance squared. That's all we just calculated on Excel. And that total squared variation to the line is 2.74, our total squared error with the line. | Calculating R-squared Regression Probability and Statistics Khan Academy.mp3 |
And what that is, is you take each of these data points, vertical distance to the line. So this distance squared plus this distance squared plus this distance squared plus this distance squared. That's all we just calculated on Excel. And that total squared variation to the line is 2.74, our total squared error with the line. And then the other number we figured out was the total distance from the mean. So the mean here is y is equal to 1 4th. So that's going to be right over here. | Calculating R-squared Regression Probability and Statistics Khan Academy.mp3 |
And that total squared variation to the line is 2.74, our total squared error with the line. And then the other number we figured out was the total distance from the mean. So the mean here is y is equal to 1 4th. So that's going to be right over here. So y is equal to 1 4th is going to be right over, this is 1 half, so right over here. So this is our mean y, let me draw it a little bit neater than that, this is our mean y value. This is our mean y value, or the central tendency for our y values. | Calculating R-squared Regression Probability and Statistics Khan Academy.mp3 |
So that's going to be right over here. So y is equal to 1 4th is going to be right over, this is 1 half, so right over here. So this is our mean y, let me draw it a little bit neater than that, this is our mean y value. This is our mean y value, or the central tendency for our y values. And so what we calculated next was the total error, the squared error from the means of our y values. That's what we calculated over here. This is what we calculated over here in the spreadsheet. | Calculating R-squared Regression Probability and Statistics Khan Academy.mp3 |
This is our mean y value, or the central tendency for our y values. And so what we calculated next was the total error, the squared error from the means of our y values. That's what we calculated over here. This is what we calculated over here in the spreadsheet. You see it in the formula. It is this number, e2 minus 0.25, which is the mean of our y's, squared. That's exactly what we calculated. | Calculating R-squared Regression Probability and Statistics Khan Academy.mp3 |
This is what we calculated over here in the spreadsheet. You see it in the formula. It is this number, e2 minus 0.25, which is the mean of our y's, squared. That's exactly what we calculated. We calculated for each of the y values and then we summed them all up. It's 22.75. It is equal to 22.75. | Calculating R-squared Regression Probability and Statistics Khan Academy.mp3 |
That's exactly what we calculated. We calculated for each of the y values and then we summed them all up. It's 22.75. It is equal to 22.75. So if you wanted to know, so this is essentially the error that the line does not explain. This is the total error, this is the total variation of the numbers. So if you wanted to know the percentage of the total variation that is not explained by the line, you could take this number divided by this number. | Calculating R-squared Regression Probability and Statistics Khan Academy.mp3 |
It is equal to 22.75. So if you wanted to know, so this is essentially the error that the line does not explain. This is the total error, this is the total variation of the numbers. So if you wanted to know the percentage of the total variation that is not explained by the line, you could take this number divided by this number. So 2.74 over 22.75. This tells us the percentage of total variation not explained by the line or by the variation in x. By variation in x. | Calculating R-squared Regression Probability and Statistics Khan Academy.mp3 |
So if you wanted to know the percentage of the total variation that is not explained by the line, you could take this number divided by this number. So 2.74 over 22.75. This tells us the percentage of total variation not explained by the line or by the variation in x. By variation in x. And so what is this number going to be? I could just use Excel for this. So if I'm just going to divide this number divided by this number right over there, I get 0.12. | Calculating R-squared Regression Probability and Statistics Khan Academy.mp3 |
By variation in x. And so what is this number going to be? I could just use Excel for this. So if I'm just going to divide this number divided by this number right over there, I get 0.12. So this is equal to 0.12. So this is equal right over here. This is equal to 0.12. | Calculating R-squared Regression Probability and Statistics Khan Academy.mp3 |
So if I'm just going to divide this number divided by this number right over there, I get 0.12. So this is equal to 0.12. So this is equal right over here. This is equal to 0.12. Or another way to think about it is 12% of the total variation is not explained by the variation in x. The total squared distance between each of the points or their kind of spread, their variation, is not explained by the variation in x. So if you want the amount that is explained by the variance in x, you just subtract that from 1. | Calculating R-squared Regression Probability and Statistics Khan Academy.mp3 |
This is equal to 0.12. Or another way to think about it is 12% of the total variation is not explained by the variation in x. The total squared distance between each of the points or their kind of spread, their variation, is not explained by the variation in x. So if you want the amount that is explained by the variance in x, you just subtract that from 1. So let me write it right over here. So we have our r squared, which is the percent of the total variation that is explained by x is going to be 1 minus that 0.12 that we just calculated, which is going to be 0.88. So our r squared here is 0.88. | Calculating R-squared Regression Probability and Statistics Khan Academy.mp3 |
So if you want the amount that is explained by the variance in x, you just subtract that from 1. So let me write it right over here. So we have our r squared, which is the percent of the total variation that is explained by x is going to be 1 minus that 0.12 that we just calculated, which is going to be 0.88. So our r squared here is 0.88. It's very, very close to 1. The highest number it can be is 1. So what this tells us, or a way to interpret this, is 88% of the total variation of these y values is explained by the line or by the variation in x. | Calculating R-squared Regression Probability and Statistics Khan Academy.mp3 |
So our r squared here is 0.88. It's very, very close to 1. The highest number it can be is 1. So what this tells us, or a way to interpret this, is 88% of the total variation of these y values is explained by the line or by the variation in x. And you can see that. It looks like a pretty good fit. Each of these aren't too far. | Calculating R-squared Regression Probability and Statistics Khan Academy.mp3 |
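The arithmetic of this last step can be sketched in a couple of lines; the two totals are the ones computed from the spreadsheet earlier:

```python
sse_line = 2.7381   # total squared error with the line
ss_total = 22.75    # total squared variation of the y values from their mean

fraction_unexplained = sse_line / ss_total   # about 0.12
r_squared = 1 - fraction_unexplained         # about 0.88

print(round(fraction_unexplained, 2), round(r_squared, 2))
```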
So what this tells us, or a way to interpret this, is 88% of the total variation of these y values is explained by the line or by the variation in x. And you can see that. It looks like a pretty good fit. Each of these aren't too far. They're definitely much closer to this line. Each of these points are definitely much closer to the line than they are to the mean line. In fact, all of them are closer to our actual line than to the mean. | Calculating R-squared Regression Probability and Statistics Khan Academy.mp3 |
So a good place to start is just to define a random variable that essentially represents what you care about. So let's just say the number of cars that pass in some amount of time, let's say in an hour. And your goal is to figure out the probability distribution of this random variable. And then once you know the probability distribution, then you can figure out what's the probability that 100 cars pass in an hour, or the probability that no cars pass in an hour, and you'd be unstoppable. And just a little aside, just to move forward with this video, there's two assumptions we need to make, because we're going to study the Poisson distribution. In order to study it, there's two assumptions we have to make, that any hour at this point on the street is no different than any other hour. And we know that's probably false. | Poisson process 1 Probability and Statistics Khan Academy.mp3 |
And then once you know the probability distribution, then you can figure out what's the probability that 100 cars pass in an hour, or the probability that no cars pass in an hour, and you'd be unstoppable. And just a little aside, just to move forward with this video, there's two assumptions we need to make, because we're going to study the Poisson distribution. In order to study it, there's two assumptions we have to make, that any hour at this point on the street is no different than any other hour. And we know that's probably false. During rush hour in a real situation, you probably would have more cars than in another rush hour. And if you wanted to be more realistic, maybe we do it in a day. Because in a day, any period of time, actually, no, I shouldn't do a day. | Poisson process 1 Probability and Statistics Khan Academy.mp3 |
And we know that's probably false. During rush hour in a real situation, you probably would have more cars than in another rush hour. And if you wanted to be more realistic, maybe we do it in a day. Because in a day, any period of time, actually, no, I shouldn't do a day. We have to assume that every hour is completely just like any other hour. And actually, even within the hour, there's really no differentiation from one second to the other in terms of the probabilities that a car arrives. So that's a little bit of a simplifying assumption that might not truly apply to traffic, but I think we can make that assumption. | Poisson process 1 Probability and Statistics Khan Academy.mp3 |
Because in a day, any period of time, actually, no, I shouldn't do a day. We have to assume that every hour is completely just like any other hour. And actually, even within the hour, there's really no differentiation from one second to the other in terms of the probabilities that a car arrives. So that's a little bit of a simplifying assumption that might not truly apply to traffic, but I think we can make that assumption. And then the other assumption we need to make is that if a bunch of cars pass in one hour, that doesn't mean that fewer cars will pass in the next. That in no way does the number of cars that pass in one period affect or correlate or somehow influence the number of cars that pass in the next. That they're really independent. | Poisson process 1 Probability and Statistics Khan Academy.mp3 |
So that's a little bit of a simplifying assumption that might not truly apply to traffic, but I think we can make that assumption. And then the other assumption we need to make is that if a bunch of cars pass in one hour, that doesn't mean that fewer cars will pass in the next. That in no way does the number of cars that pass in one period affect or correlate or somehow influence the number of cars that pass in the next. That they're really independent. Given that, we can then at least try using the skills we have to model out some type of a distribution. The first thing you do, and I'd recommend doing this for any distribution, is maybe we can estimate the mean. Let's sit out on that curb and measure what this variable is over a bunch of hours and then average it up. | Poisson process 1 Probability and Statistics Khan Academy.mp3 |
That they're really independent. Given that, we can then at least try using the skills we have to model out some type of a distribution. The first thing you do, and I'd recommend doing this for any distribution, is maybe we can estimate the mean. Let's sit out on that curb and measure what this variable is over a bunch of hours and then average it up. And that's going to be a pretty good estimator for the actual mean of our population, or since it's a random variable, the expected value of this random variable. Let's say you do that and you get your best estimate of the expected value of this random variable is, I'll use the letter lambda. So this could be 9 cars per hour. | Poisson process 1 Probability and Statistics Khan Academy.mp3 |
Let's sit out on that curb and measure what this variable is over a bunch of hours and then average it up. And that's going to be a pretty good estimator for the actual mean of our population, or since it's a random variable, the expected value of this random variable. Let's say you do that and you get your best estimate of the expected value of this random variable is, I'll use the letter lambda. So this could be 9 cars per hour. You sat out there, it could be 9.3 cars per hour. You sat out there over hundreds of hours and you just counted the number of cars each hour and you averaged them all up. And you said on average there are 9.3 cars per hour and you feel that's a pretty good estimate. | Poisson process 1 Probability and Statistics Khan Academy.mp3 |
So this could be 9 cars per hour. You sat out there, it could be 9.3 cars per hour. You sat out there over hundreds of hours and you just counted the number of cars each hour and you averaged them all up. And you said on average there are 9.3 cars per hour and you feel that's a pretty good estimate. So that's what you have there. And let's see what we could do. We know the binomial distribution. | Poisson process 1 Probability and Statistics Khan Academy.mp3 |
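The estimate described here is just the sample mean of the observed hourly counts; a minimal sketch, with the hourly counts invented purely for illustration:

```python
# Hypothetical counts of cars per hour, one entry per observed hour
hourly_counts = [8, 11, 9, 10, 7, 12, 9, 8]

# The best estimate of lambda, the expected cars per hour, is the average
lambda_hat = sum(hourly_counts) / len(hourly_counts)
print(lambda_hat)  # 9.25
```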
And you said on average there are 9.3 cars per hour and you feel that's a pretty good estimate. So that's what you have there. And let's see what we could do. We know the binomial distribution. The binomial distribution tells us that the expected value of a random variable is equal to the number of trials that that random variable is kind of composed of. Before in the previous videos we were counting the number of heads in a coin toss. So this would be the number of coin tosses times the probability of success over each toss. | Poisson process 1 Probability and Statistics Khan Academy.mp3 |
We know the binomial distribution. The binomial distribution tells us that the expected value of a random variable is equal to the number of trials that that random variable is kind of composed of. Before in the previous videos we were counting the number of heads in a coin toss. So this would be the number of coin tosses times the probability of success over each toss. This is what we did with the binomial distribution. So maybe we can model our traffic situation something similar. This is the number of cars that pass in an hour. | Poisson process 1 Probability and Statistics Khan Academy.mp3 |
So this would be the number of coin tosses times the probability of success over each toss. This is what we did with the binomial distribution. So maybe we can model our traffic situation something similar. This is the number of cars that pass in an hour. So maybe we could say lambda cars per hour is equal to, I don't know, let's make each experiment or each toss of the coin equal to whether a car passes in a given minute. So there's 60 minutes per hour. And then so there would be 60 trials. | Poisson process 1 Probability and Statistics Khan Academy.mp3 |
This is the number of cars that pass in an hour. So maybe we could say lambda cars per hour is equal to, I don't know, let's make each experiment or each toss of the coin equal to whether a car passes in a given minute. So there's 60 minutes per hour. And then so there would be 60 trials. And then the probability that we have success in each of those trials, if we model this as a binomial distribution, would be lambda over 60 cars per minute. And this would be a probability. This would be n. And this would be the probability. | Poisson process 1 Probability and Statistics Khan Academy.mp3 |
And then so there would be 60 trials. And then the probability that we have success in each of those trials, if we model this as a binomial distribution, would be lambda over 60 cars per minute. And this would be a probability. This would be n. And this would be the probability. If we said that this is a binomial distribution. And this probably wouldn't be that bad of an approximation. If you actually then said, oh, this is a binomial distribution, so the probability that our random variable equals some given value k, the probability that exactly three cars pass in a given hour, it would then be equal to n choose k, where n would be 60. | Poisson process 1 Probability and Statistics Khan Academy.mp3
This would be n. And this would be the probability. If we said that this is a binomial distribution. And this probably wouldn't be that bad of an approximation. If you actually then said, oh, this is a binomial distribution, so the probability that our random variable equals some given value k, the probability that exactly three cars pass in a given hour, it would then be equal to n choose k, where n would be 60. | Poisson process 1 Probability and Statistics Khan Academy.mp3
If you actually then said, oh, this is a binomial distribution, so the probability that our random variable equals some given value k, the probability that exactly three cars pass in a given hour, it would then be equal to n choose k, where n would be 60 and k would be three cars, times the probability of success, so the probability that a car passes in any minute, so it would be lambda over 60 to the number of successes we need, so to the kth power, times the probability of no success, or that no cars pass, to the n minus k. If we have k successes, we have to have 60 minus k failures. There are 60 minus k minutes where no car passed. And this actually wouldn't be that bad of an approximation, where you have 60 intervals and you say this is a binomial distribution, and you'd probably get reasonable results. | Poisson process 1 Probability and Statistics Khan Academy.mp3
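The binomial model described in this passage, with one trial per minute, can be sketched as follows; lambda = 9 cars per hour is an assumed estimate, and the function name is my own:

```python
from math import comb

def prob_k_cars(k, lam, n=60):
    # P(X = k) under the binomial model: (n choose k) * p^k * (1 - p)^(n - k),
    # where p = lam / 60 is the chance that a car passes in any given minute
    p = lam / n
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

lam = 9  # assumed estimate of cars per hour

# The probabilities over all possible counts k = 0..60 sum to 1,
# and the expected value n * p works out to lambda, as it should
total = sum(prob_k_cars(k, lam) for k in range(61))
print(round(total, 10))  # 1.0

print(prob_k_cars(3, lam))  # probability that exactly 3 cars pass in an hour
```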
And this actually wouldn't be that bad of an approximation, where you have 60 intervals and you say this is a binomial distribution, and you'd probably get reasonable results. But there's a core issue here. In this model, where we model it as a binomial distribution, what happens if more than one car passes in an hour? Or more than one car passes in a minute? The way we have it right now, we call it a success if one car passes in a minute. And if you're kind of counting, it counts as one success, even if five cars pass in that minute. And so you say, oh, OK, Sal. | Poisson process 1 Probability and Statistics Khan Academy.mp3 |