Sentence | video_title |
---|---|
I overestimated the true variance. So this gives us a pretty good sense that n minus 1 is the right thing to do. Now, this is another way and another interesting way of visualizing it. In the horizontal axis right over here, we're comparing each plot as one of our samples. And how far to the right is, how much more is that sample mean than the true mean? And when we go to the left, it's how much less is the sample mean than the true mean? So for example, this sample right over here, it's all the way over to the right. | Another simulation giving evidence that (n-1) gives us an unbiased estimate of variance.mp3 |
In the horizontal axis right over here, we're comparing each plot as one of our samples. And how far to the right is, how much more is that sample mean than the true mean? And when we go to the left, it's how much less is the sample mean than the true mean? So for example, this sample right over here, it's all the way over to the right. The sample mean there was a lot more than the true mean. Sample mean here was a lot less than the true mean. Sample mean here, only a little bit more than the true mean. | Another simulation giving evidence that (n-1) gives us an unbiased estimate of variance.mp3 |
So for example, this sample right over here, it's all the way over to the right. The sample mean there was a lot more than the true mean. Sample mean here was a lot less than the true mean. Sample mean here, only a little bit more than the true mean. In the vertical axis, using this denominator, dividing by n, we calculate two different variances. One variance we use the sample mean. The other variance we use the population mean. | Another simulation giving evidence that (n-1) gives us an unbiased estimate of variance.mp3 |
Sample mean here, only a little bit more than the true mean. In the vertical axis, using this denominator, dividing by n, we calculate two different variances. One variance we use the sample mean. The other variance we use the population mean. And this, in the vertical axis, we compare the difference between the variance calculated with the sample mean versus the variance calculated with the population mean. So for example, this point right over here, when we calculate our variance with our sample mean, which is the normal way we do it, it significantly underestimates what the variance would have been if somehow we knew what the population mean was and we could calculate it that way. And you get this really interesting shape. | Another simulation giving evidence that (n-1) gives us an unbiased estimate of variance.mp3 |
The other variance we use the population mean. And this, in the vertical axis, we compare the difference between the variance calculated with the sample mean versus the variance calculated with the population mean. So for example, this point right over here, when we calculate our variance with our sample mean, which is the normal way we do it, it significantly underestimates what the variance would have been if somehow we knew what the population mean was and we could calculate it that way. And you get this really interesting shape. And it's something to think about. And he recommends thinking about why or what kind of a shape this actually is. The other interesting thing is, when you look at it this way, it's pretty clear this entire graph is sitting below the horizontal axis. | Another simulation giving evidence that (n-1) gives us an unbiased estimate of variance.mp3 |
And you get this really interesting shape. And it's something to think about. And he recommends thinking about why or what kind of a shape this actually is. The other interesting thing is, when you look at it this way, it's pretty clear this entire graph is sitting below the horizontal axis. So we're always, when we calculate our sample variance using this formula, when we use the sample mean to do it, which we typically do, we're always getting a lower variance than when we use the population mean. Now this over here, when we divide by n minus 1, we're not always underestimating. Sometimes we're overestimating it. | Another simulation giving evidence that (n-1) gives us an unbiased estimate of variance.mp3 |
The other interesting thing is, when you look at it this way, it's pretty clear this entire graph is sitting below the horizontal axis. So we're always, when we calculate our sample variance using this formula, when we use the sample mean to do it, which we typically do, we're always getting a lower variance than when we use the population mean. Now this over here, when we divide by n minus 1, we're not always underestimating. Sometimes we're overestimating it. And when you take the mean of all of these variances, you converge. And here we're overestimating it a little bit more. And just to be clear, what we're talking about in these three graphs, let me take a screenshot of it and explain it in a little bit more depth. | Another simulation giving evidence that (n-1) gives us an unbiased estimate of variance.mp3 |
Sometimes we're overestimating it. And when you take the mean of all of these variances, you converge. And here we're overestimating it a little bit more. And just to be clear, what we're talking about in these three graphs, let me take a screenshot of it and explain it in a little bit more depth. So just to be clear, in this red graph right over here, let me do this in a color close to, at least, so this orange, what this distance is for each of these samples, we're calculating the sample variance using, so let me, using the sample mean. And in this case, we are using n as our denominator, in this case right over here. And from that, we're subtracting the sample variance, or I guess you could call this some kind of pseudo-sample variance, if we somehow knew the population mean. | Another simulation giving evidence that (n-1) gives us an unbiased estimate of variance.mp3 |
And just to be clear, what we're talking about in these three graphs, let me take a screenshot of it and explain it in a little bit more depth. So just to be clear, in this red graph right over here, let me do this in a color close to, at least, so this orange, what this distance is for each of these samples, we're calculating the sample variance using, so let me, using the sample mean. And in this case, we are using n as our denominator, in this case right over here. And from that, we're subtracting the sample variance, or I guess you could call this some kind of pseudo-sample variance, if we somehow knew the population mean. This isn't something that you see a lot in statistics, but it's a gauge of how much we are underestimating our sample variance, given that we don't have the true population mean at our disposal. And so this is the distance we're calculating. And you see we are always underestimating. | Another simulation giving evidence that (n-1) gives us an unbiased estimate of variance.mp3 |
And from that, we're subtracting the sample variance, or I guess you could call this some kind of pseudo-sample variance, if we somehow knew the population mean. This isn't something that you see a lot in statistics, but it's a gauge of how much we are underestimating our sample variance, given that we don't have the true population mean at our disposal. And so this is the distance we're calculating. And you see we are always underestimating. Here, we overestimate a little bit, and we also underestimate. But when you take the mean, and when you average them all out, it converges to the actual value. So here, we're dividing by n minus 1. | Another simulation giving evidence that (n-1) gives us an unbiased estimate of variance.mp3 |
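The experiment described in these clips is easy to reproduce. Below is a minimal sketch in Python (not the simulation tool shown in the video; the uniform population, the sample size of 10, and the trial count are all arbitrary choices) that draws many samples and averages the three variance estimates discussed above: dividing by n with the sample mean, dividing by n - 1 with the sample mean, and dividing by n with the true population mean.

```python
import random

# Hypothetical population -- any distribution would do; uniform is an arbitrary choice.
population = [random.uniform(0, 100) for _ in range(10_000)]
mu = sum(population) / len(population)
true_var = sum((x - mu) ** 2 for x in population) / len(population)

n, trials = 10, 50_000
biased = unbiased = with_pop_mean = 0.0
for _ in range(trials):
    sample = random.choices(population, k=n)   # draw one sample of size n
    x_bar = sum(sample) / n
    ss_sample_mean = sum((x - x_bar) ** 2 for x in sample)
    ss_pop_mean = sum((x - mu) ** 2 for x in sample)
    biased += ss_sample_mean / n          # divide by n, using the sample mean
    unbiased += ss_sample_mean / (n - 1)  # divide by n - 1, using the sample mean
    with_pop_mean += ss_pop_mean / n      # divide by n, but using the true mean

print("true population variance:     ", true_var)
print("average of biased estimates:  ", biased / trials)        # systematically low
print("average of unbiased estimates:", unbiased / trials)      # near true_var
print("average using population mean:", with_pop_mean / trials) # also near true_var
```

Averaged over many samples, the n - 1 version and the population-mean version should both land near the true variance, while the plain divide-by-n version should come in low, which is exactly the pattern the plots in the video show.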
It is the number of successes after n trials, after n trials, where the probability of success, the probability of success, success for each trial is p. And this is a safe, this is a reasonable way to describe really any random, any binomial variable. We're assuming that each of these trials are independent. The probability stays constant. We have a finite number of trials right over here. Each trial results in either a very clear success or failure. So what we're gonna focus on in this video is, well, what would be the expected value of this binomial variable? What would the expected value, expected value of x be equal to? | Expected value of a binomial variable Random variables AP Statistics Khan Academy.mp3 |
We have a finite number of trials right over here. Each trial results in either a very clear success or failure. So what we're gonna focus on in this video is, well, what would be the expected value of this binomial variable? What would the expected value, expected value of x be equal to? And I will just cut to the chase and tell you the answer, and then later in this video, we'll prove it to ourselves a little bit more mathematically. The expected value of x, it turns out, is just going to be equal to the number of trials times the probability of success for each of those trials. And so if you wanted to make that a little bit more concrete, imagine if a trial is a free throw, taking a shot from the free throw line. | Expected value of a binomial variable Random variables AP Statistics Khan Academy.mp3 |
What would the expected value, expected value of x be equal to? And I will just cut to the chase and tell you the answer, and then later in this video, we'll prove it to ourselves a little bit more mathematically. The expected value of x, it turns out, is just going to be equal to the number of trials times the probability of success for each of those trials. And so if you wanted to make that a little bit more concrete, imagine if a trial is a free throw, taking a shot from the free throw line. Success, success is made shot, so you actually make the shot, the ball went in the basket. Your probability is, let me do this yellow color, your probability, this would be your free throw percentage, so let's say it's 30% or 0.3. And let's say, for the sake of argument, that we're taking 10 free throws, so n is equal to 10. | Expected value of a binomial variable Random variables AP Statistics Khan Academy.mp3 |
And so if you wanted to make that a little bit more concrete, imagine if a trial is a free throw, taking a shot from the free throw line. Success, success is made shot, so you actually make the shot, the ball went in the basket. Your probability is, let me do this yellow color, your probability, this would be your free throw percentage, so let's say it's 30% or 0.3. And let's say, for the sake of argument, that we're taking 10 free throws, so n is equal to 10. So this is making it all a lot more concrete. So in this particular scenario, your expected value, your expected value, if x is the number of made free throws after taking 10 free throws with a free throw percentage of 30%, well, based on what I just told you, it'd be n times p. It would be the number of trials, 10, times the probability of success in any one of those trials, 0.3, which is just going to be, of course, equal to three. | Expected value of a binomial variable Random variables AP Statistics Khan Academy.mp3 |
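Written as an equation, the concrete free-throw calculation just described is:

```latex
E[X] = n \cdot p = 10 \times 0.3 = 3
```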
And let's say, for the sake of argument, that we're taking 10 free throws, so n is equal to 10. So this is making it all a lot more concrete. So in this particular scenario, your expected value, your expected value, if x is the number of made free throws after taking 10 free throws with a free throw percentage of 30%, well, based on what I just told you, it'd be n times p. It would be the number of trials, 10, times the probability of success in any one of those trials, 0.3, which is just going to be, of course, equal to three. Now, does that make intuitive sense? Well, if you're taking 10 shots with a 30% free throw percentage, it actually does feel natural that I would expect to make three shots. Now, with that out of the way, let's make ourselves feel good about this mathematically, and we're gonna leverage some of our expected value properties. | Expected value of a binomial variable Random variables AP Statistics Khan Academy.mp3 |
It would be the number of trials, 10, times the probability of success in any one of those trials, 0.3, which is just going to be, of course, equal to three. Now, does that make intuitive sense? Well, if you're taking 10 shots with a 30% free throw percentage, it actually does feel natural that I would expect to make three shots. Now, with that out of the way, let's make ourselves feel good about this mathematically, and we're gonna leverage some of our expected value properties. In particular, we're gonna leverage the fact that if I have the expected value of the sum of two independent random variables, let's say x plus y, it's going to be equal to the expected value of x plus the expected value of y that we talk about in other videos. And so, assuming this right over here, let's construct a new random variable. Let's call our random variable y. | Expected value of a binomial variable Random variables AP Statistics Khan Academy.mp3 |
Now, with that out of the way, let's make ourselves feel good about this mathematically, and we're gonna leverage some of our expected value properties. In particular, we're gonna leverage the fact that if I have the expected value of the sum of two independent random variables, let's say x plus y, it's going to be equal to the expected value of x plus the expected value of y that we talk about in other videos. And so, assuming this right over here, let's construct a new random variable. Let's call our random variable y. And we know the following things about y. The probability that y is equal to one is equal to p, and the probability that y is equal to zero is equal to one minus p. And these are the only two outcomes for this random variable. And so, you might be seeing where this is going. | Expected value of a binomial variable Random variables AP Statistics Khan Academy.mp3 |
Let's call our random variable y. And we know the following things about y. The probability that y is equal to one is equal to p, and the probability that y is equal to zero is equal to one minus p. And these are the only two outcomes for this random variable. And so, you might be seeing where this is going. You could view this random variable, it's really representing one trial. It becomes one when you have a success, zero when you don't have a success. And so, you could view our original random variable x right over here as being equal to y plus y, and we're gonna have 10 of these. | Expected value of a binomial variable Random variables AP Statistics Khan Academy.mp3 |
And so, you might be seeing where this is going. You could view this random variable, it's really representing one trial. It becomes one when you have a success, zero when you don't have a success. And so, you could view our original random variable x right over here as being equal to y plus y, and we're gonna have 10 of these. So, we're gonna have 10 y's. In the concrete sense, you could view the random variable y as equaling one if you make a free throw, and equaling zero if you don't make a free throw. It's really just representing one of those trials, and you can view x as the sum of n of those trials. | Expected value of a binomial variable Random variables AP Statistics Khan Academy.mp3 |
And so, you could view our original random variable x right over here as being equal to y plus y, and we're gonna have 10 of these. So, we're gonna have 10 y's. In the concrete sense, you could view the random variable y as equaling one if you make a free throw, and equaling zero if you don't make a free throw. It's really just representing one of those trials, and you can view x as the sum of n of those trials. Well, actually, let me be very clear here. I immediately went to the concrete, but I really should be saying n y's, because I wanna stay general right over here. So, there are n y's right over here. | Expected value of a binomial variable Random variables AP Statistics Khan Academy.mp3 |
It's really just representing one of those trials, and you can view x as the sum of n of those trials. Well, actually, let me be very clear here. I immediately went to the concrete, but I really should be saying n y's, because I wanna stay general right over here. So, there are n y's right over here. This was just a particular example, but I am going to try to stay general for the rest of the video, because now we are really trying to prove this result right over here. So, let's just take the expected value of both sides. So, what is it going to be? | Expected value of a binomial variable Random variables AP Statistics Khan Academy.mp3 |
So, there are n n y's right over here. This was just a particular example, but I am going to try to stay general for the rest of the video, because now we are really trying to prove this result right over here. So, let's just take the expected value of both sides. So, what is it going to be? So, we get the expected value of x is equal to, well, it's the expected value of all of this thing, but by that property right over here, this is going to be the expected value of y plus the expected value of y, plus, and we're gonna do this n times, plus the expected value of y, and we're gonna have n of these. So, we have n. And so, you could rewrite this as being equal to, so this is our n right over here. This is n times the expected value of y. | Expected value of a binomial variable Random variables AP Statistics Khan Academy.mp3 |
So, what is it going to be? So, we get the expected value of x is equal to, well, it's the expected value of all of this thing, but by that property right over here, this is going to be the expected value of y plus the expected value of y, plus, and we're gonna do this n times, plus the expected value of y, and we're gonna have n of these. So, we have n. And so, you could rewrite this as being equal to, so this is our n right over here. This is n times the expected value of y. Now, what is the expected value of y? Well, this is pretty straightforward. We can actually just do it directly. | Expected value of a binomial variable Random variables AP Statistics Khan Academy.mp3 |
This is n times the expected value of y. Now, what is the expected value of y? Well, this is pretty straightforward. We can actually just do it directly. The expected value of y, let me just write it over here. The expected value of y is just the probability weighted outcomes, and since there's only two discrete outcomes here, it's pretty easy to calculate. We have a probability of p of getting a one, so it's p times one, plus we have a probability of one minus p of getting a zero. | Expected value of a binomial variable Random variables AP Statistics Khan Academy.mp3 |
We can actually just do it directly. The expected value of y, let me just write it over here. The expected value of y is just the probability weighted outcomes, and since there's only two discrete outcomes here, it's pretty easy to calculate. We have a probability of p of getting a one, so it's p times one, plus we have a probability of one minus p of getting a zero. Well, what does this simplify to? Well, zero times anything, that's zero, and then you have one times p. This is just equal to p. So, expected value of y is just equal to p, and so there you have it. We get the expected value of x is 10 times the expected value, or the expected value of x is n times the expected value of y, and the expected value of y is p, so the expected value of x is equal to np. | Expected value of a binomial variable Random variables AP Statistics Khan Academy.mp3 |
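As a quick numerical check of this result, here is a small sketch (mine, not from the video) that simulates the free-throw example, n = 10 attempts with p = 0.3, many times and compares the empirical average number of successes to n times p:

```python
import random

n, p = 10, 0.3        # 10 free throws with a 30% chance of making each one
trials = 100_000

total_makes = 0
for _ in range(trials):
    # One trial of the binomial variable: count successes in n independent attempts.
    total_makes += sum(1 for _ in range(n) if random.random() < p)

print("empirical average:", total_makes / trials)  # should hover around 3
print("n * p:            ", n * p)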
You start off with any crazy distribution. It doesn't have to be crazy. It could be a nice normal distribution, but to really make the point that you don't have to have a normal distribution, I like to use crazy ones. So let's say you have some kind of crazy distribution that looks something like that. It could look like anything. So we've seen multiple times you take samples from this crazy distribution. So let's say you were to take samples of, let's say n is equal to 10. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
So let's say you have some kind of crazy distribution that looks something like that. It could look like anything. So we've seen multiple times you take samples from this crazy distribution. So let's say you were to take samples of, let's say n is equal to 10. So we take 10 instances of this random variable, average them out, and then plot our average. We plot our average. We get one instance there. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
So let's say you were to take samples of, let's say n is equal to 10. So we take 10 instances of this random variable, average them out, and then plot our average. We plot our average. We get one instance there. We keep doing that. We do that again. We take 10 samples from this random variable, average them, plot them again. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
We get one instance there. We keep doing that. We do that again. We take 10 samples from this random variable, average them, plot them again. You plot it again, and eventually you do this a gazillion times, in theory infinite number of times, and you're going to approach the sampling distribution of the sample mean. And n equal 10, it's not gonna be a perfect normal distribution, but it's gonna be close. It'd be perfect only if n was infinity. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
We take 10 samples from this random variable, average them, plot them again. You plot it again, and eventually you do this a gazillion times, in theory infinite number of times, and you're going to approach the sampling distribution of the sample mean. And n equal 10, it's not gonna be a perfect normal distribution, but it's gonna be close. It'd be perfect only if n was infinity. But let's say we eventually, all of our samples, you know, we get a lot of averages that are there, that stacks up there, that stacks up there, and eventually we'll approach something that looks something like that. And we've seen from the last video that, one, if, let's say we were to do it again, and this time let's say that n is equal to 20. One, the distribution that we get is going to be more normal, and maybe in future videos we'll delve even deeper into things like kurtosis and skew. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
It'd be perfect only if n was infinity. But let's say we eventually, all of our samples, you know, we get a lot of averages that are there, that stacks up there, that stacks up there, and eventually we'll approach something that looks something like that. And we've seen from the last video that, one, if, let's say we were to do it again, and this time let's say that n is equal to 20. One, the distribution that we get is going to be more normal, and maybe in future videos we'll delve even deeper into things like kurtosis and skew. But it's gonna be more normal, but even more important, or I guess even more obviously to us, and we saw that in the experiment, it's gonna have a lower standard deviation. So they're all gonna have the same mean. Let's say the mean here is, you know, I don't know, let's say the mean here is five. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
One, the distribution that we get is going to be more normal, and maybe in future videos we'll delve even deeper into things like kurtosis and skew. But it's gonna be more normal, but even more important, or I guess even more obviously to us, and we saw that in the experiment, it's gonna have a lower standard deviation. So they're all gonna have the same mean. Let's say the mean here is, you know, I don't know, let's say the mean here is five. Then the mean here is also gonna be five. The mean of our sampling distribution of the sample mean is gonna be five. And it doesn't matter what our n is. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
Let's say the mean here is, you know, I don't know, let's say the mean here is five. Then the mean here is also gonna be five. The mean of our sampling distribution of the sample mean is gonna be five. And it doesn't matter what our n is. If our n is 20, it's still gonna be five, but our standard deviation is gonna be less in either of these scenarios. And we saw that just by experimenting. It might look like this. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
And it doesn't matter what our n is. If our n is 20, it's still gonna be five, but our standard deviation is gonna be less in either of these scenarios. And we saw that just by experimenting. It might look like this. It's gonna be more normal, but it's gonna have a tighter standard deviation. So maybe it'll look like that. And if we did it with a even larger sample size, let me do that in a different color. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
It might look like this. It's gonna be more normal, but it's gonna have a tighter standard deviation. So maybe it'll look like that. And if we did it with a even larger sample size, let me do that in a different color. If we did that with an even larger sample size, n is equal to 100, what we're gonna get is something that fits the normal distribution even better. We take 100 instances of this random variable, average them, plot it. 100 instances of this random variable, average them, plot it. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
And if we did it with a even larger sample size, let me do that in a different color. If we did that with an even larger sample size, n is equal to 100, what we're gonna get is something that fits the normal distribution even better. We take 100 instances of this random variable, average them, plot it. 100 instances of this random variable, average them, plot it. We just keep doing that. If we keep doing that, what we're gonna have is something that's even more normal than either of these. So it's gonna be a much closer fit to a true normal distribution. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
100 instances of this random variable, average them, plot it. We just keep doing that. If we keep doing that, what we're gonna have is something that's even more normal than either of these. So it's gonna be a much closer fit to a true normal distribution. But even more obvious to the human eye, it's gonna be even tighter. So it's going to be a very low standard deviation. It's gonna look something like that. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
So it's gonna be a much closer fit to a true normal distribution. But even more obvious to the human eye, it's gonna be even tighter. So it's going to be a very low standard deviation. It's gonna look something like that. And I'll show you that on the simulation app in the next, or probably later in this video. So two things happen. As you increase your sample size for every time you do the average, two things are happening. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
It's gonna look something like that. And I'll show you that on the simulation app in the next, or probably later in this video. So two things happen. As you increase your sample size for every time you do the average, two things are happening. You're becoming more normal, and your standard deviation is getting smaller. So the question might arise, is there a formula? So if I know the standard deviation, so this is my standard deviation of just my original probability density function. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
As you increase your sample size for every time you do the average, two things are happening. You're becoming more normal, and your standard deviation is getting smaller. So the question might arise, is there a formula? So if I know the standard deviation, so this is my standard deviation of just my original probability density function. This is the mean of my original probability density function. So if I know the standard deviation, and I know n, n is gonna change depending on how many samples I'm taking every time I do a sample mean. If I know my standard deviation, or maybe if I know my variance, the variance is just the standard deviation squared. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
So if I know the standard deviation, so this is my standard deviation of just my original probability density function. This is the mean of my original probability density function. So if I know the standard deviation, and I know n, n is gonna change depending on how many samples I'm taking every time I do a sample mean. If I know my standard deviation, or maybe if I know my variance, the variance is just the standard deviation squared. If you don't remember that, you might wanna review those videos. But if I know the variance of my original distribution, and if I know what my n is, how many samples I'm gonna take every time before I average them in order to plot one thing in my sampling distribution of my sample mean, is there a way to predict what the mean of these distributions are? And so this, sorry, the standard deviation of these distributions, and to make, so you don't get confused between that and that, and let me say the variance. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
If I know my standard deviation, or maybe if I know my variance, the variance is just the standard deviation squared. If you don't remember that, you might wanna review those videos. But if I know the variance of my original distribution, and if I know what my n is, how many samples I'm gonna take every time before I average them in order to plot one thing in my sampling distribution of my sample mean, is there a way to predict what the mean of these distributions are? And so this, sorry, the standard deviation of these distributions, and to make, so you don't get confused between that and that, and let me say the variance. If you know the variance, you can figure out the standard deviation. One is just the square root of the other. So this is the variance of our original distribution. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
And so this, sorry, the standard deviation of these distributions, and to make, so you don't get confused between that and that, and let me say the variance. If you know the variance, you can figure out the standard deviation. One is just the square root of the other. So this is the variance of our original distribution. Now, to show that this is the variance of our sampling distribution of our sample mean, we'll write it right here. This is the variance of our mean, of our sample mean. Remember, the sample, our true mean is this, that the Greek letter mu is your true mean. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
So this is the variance of our original distribution. Now, to show that this is the variance of our sampling distribution of our sample mean, we'll write it right here. This is the variance of our mean, of our sample mean. Remember, the sample, our true mean is this, that the Greek letter mu is your true mean. This is equal to the mean, while an X with a line over it means sample mean. Sample mean. So here, what we're saying is this is the variance of our sample means. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
Remember, the sample, our true mean is this, that the Greek letter mu is your true mean. This is equal to the mean, while an X with a line over it means sample mean. Sample mean. So here, what we're saying is this is the variance of our sample means. Now, this is gonna be a true distribution. This isn't an estimate. This is, there's some, you know, if we magically knew this distribution, there's some true variance here. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
So here, what we're saying is this is the variance of our sample means. Now, this is gonna be a true distribution. This isn't an estimate. This is, there's some, you know, if we magically knew this distribution, there's some true variance here. And of course, the mean, so this has a mean. This right here, we can just get our notation right. This is the mean of the sampling distribution of the sampling mean. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
This is, there's some, you know, if we magically knew this distribution, there's some true variance here. And of course, the mean, so this has a mean. This right here, we can just get our notation right. This is the mean of the sampling distribution of the sampling mean. So this is the mean of our means. It just happens to be the same thing. This is the mean of our sample means. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
This is the mean of the sampling distribution of the sampling mean. So this is the mean of our means. It just happens to be the same thing. This is the mean of our sample means. It's gonna be the same thing as that, especially if we do the trial over and over again. But anyway, the point of this video, is there any way to figure out this variance, given the variance of the original distribution and your N? And it turns out there is. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
This is the mean of our sample means. It's gonna be the same thing as that, especially if we do the trial over and over again. But anyway, the point of this video, is there any way to figure out this variance, given the variance of the original distribution and your N? And it turns out there is. And I'm not gonna do a proof here. I really wanna give you the intuition of it. And I think you already do have the sense that every trial you take, if you take 100, you're much more likely, when you average those out, to get close to the true mean than if you took an N of two or an N of five. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
And it turns out there is. And I'm not gonna do a proof here. I really wanna give you the intuition of it. And I think you already do have the sense that every trial you take, if you take 100, you're much more likely, when you average those out, to get close to the true mean than if you took an N of two or an N of five. You're just very unlikely to be far away, right, if you took 100 trials as opposed to taking five. So I think you know that, in some way, it should be inversely proportional to N. The larger your N, the smaller standard deviation. And actually, it turns out it's about as simple as possible. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
And I think you already do have the sense that every trial you take, if you take 100, you're much more likely, when you average those out, to get close to the true mean than if you took an N of two or an N of five. You're just very unlikely to be far away, right, if you took 100 trials as opposed to taking five. So I think you know that, in some way, it should be inversely proportional to N. The larger your N, the smaller standard deviation. And actually, it turns out it's about as simple as possible. It's one of those magical things about mathematics. And I'll prove it to you one day. I want to give you a working knowledge first. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
And actually, it turns out it's about as simple as possible. It's one of those magical things about mathematics. And I'll prove it to you one day. I want to give you a working knowledge first. In statistics, I'm always struggling whether I should be formal and giving you rigorous proofs, but I've kind of come to the conclusion that it's more important to get the working knowledge first in statistics. And then later, once you've gotten all of that down, we can get into the real deep math of it and prove it to you. But I think experimental proofs are kind of all you need for right now, using those simulations to show that they're really true. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
I want to give you a working knowledge first. In statistics, I'm always struggling whether I should be formal and giving you rigorous proofs, but I've kind of come to the conclusion that it's more important to get the working knowledge first in statistics. And then later, once you've gotten all of that down, we can get into the real deep math of it and prove it to you. But I think experimental proofs are kind of all you need for right now, using those simulations to show that they're really true. So it turns out that the variance of your sampling distribution of your sample mean is equal to the variance of your original distribution, that guy right there, divided by N. That's all it is. So if this up here has a variance of, let's say this up here has a variance of 20, I'm just making that number up, then, and then let's say your N is 20, then the variance of your sampling distribution of your sample mean for N of 20, well, you're just gonna take that, the variance up here, your variance is 20, divided by your N, 20. So here, your variance is going to be 20 divided by 20, which is equal to one. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
But I think experimental proofs are kind of all you need for right now, using those simulations to show that they're really true. So it turns out that the variance of your sampling distribution of your sample mean is equal to the variance of your original distribution, that guy right there, divided by N. That's all it is. So if this up here has a variance of, let's say this up here has a variance of 20, I'm just making that number up, then, and then let's say your N is 20, then the variance of your sampling distribution of your sample mean for N of 20, well, you're just gonna take that, the variance up here, your variance is 20, divided by your N, 20. So here, your variance is going to be 20 divided by 20, which is equal to one. This is the variance of your original probability distribution, and this is your N. What's your standard deviation gonna be? What's gonna be the square root of that? Standard deviation is gonna be the square root of one, well, that's also going to be one. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
So here, your variance is going to be 20 divided by 20, which is equal to one. This is the variance of your original probability distribution, and this is your N. What's your standard deviation gonna be? What's gonna be the square root of that? Standard deviation is gonna be the square root of one, well, that's also going to be one. So we could also write this. We could take the square root of both sides of this and say the standard deviation of the sampling distribution of the sample mean, it's often called the standard deviation of the mean, and it's also called, I'm gonna write this down, the standard error of the mean, standard error of the mean. All of these things that I just mentioned, these all just mean the standard deviation of the sampling distribution of the sample mean. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
Standard deviation is gonna be the square root of one, well, that's also going to be one. So we could also write this. We could take the square root of both sides of this and say the standard deviation of the sampling distribution of the sample mean, it's often called the standard deviation of the mean, and it's also called, I'm gonna write this down, the standard error of the mean, standard error of the mean. All of these things that I just mentioned, these all just mean the standard deviation of the sampling distribution of the sample mean. That's why this is confusing, because you use the word mean and sample over and over again, and if it confuses you, let me know, I'll do another video or pause and repeat it, whatever. But if we just take the square root of both sides, the standard error of the mean, or the standard deviation of the sampling distribution of the sample mean is equal to the standard deviation of your original, of your original function, of your original probability density function, which could be very non-normal, divided by the square root of n. I just took the square root of both sides of this equation. I personally, I like to remember this, that the variance is just inversely proportional to n, and then I like to go back to this, because this is very simple in my head. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
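Putting the formulas from this passage in one place, with the numbers used in the example (original variance 20, n = 20):

```latex
\sigma_{\bar{x}}^{2} = \frac{\sigma^{2}}{n} = \frac{20}{20} = 1,
\qquad
\sigma_{\bar{x}} = \frac{\sigma}{\sqrt{n}} = \sqrt{1} = 1
```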
All of these things that I just mentioned, these all just mean the standard deviation of the sampling distribution of the sample mean. That's why this is confusing, because you use the word mean and sample over and over again, and if it confuses you, let me know, I'll do another video or pause and repeat it, whatever. But if we just take the square root of both sides, the standard error of the mean, or the standard deviation of the sampling distribution of the sample mean is equal to the standard deviation of your original, of your original function, of your original probability density function, which could be very non-normal, divided by the square root of n. I just took the square root of both sides of this equation. I personally, I like to remember this, that the variance is just inversely proportional to n, and then I like to go back to this, because this is very simple in my head. You just take the variance divided by n. Oh, and if I want the standard deviation, I just take the square roots of both sides, and I get this formula. So here, the standard deviation, when n is 20, the standard deviation of the sampling distribution of the sample mean is gonna be one. Here, when n is 100, well, our variance, our variance here, when n is equal to 100, so our variance of the sampling mean of the sample distribution, or our variance of the mean, of the sample mean, we could say, is going to be equal to 20. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
I personally, I like to remember this, that the variance is just inversely proportional to n, and then I like to go back to this, because this is very simple in my head. You just take the variance divided by n. Oh, and if I want the standard deviation, I just take the square roots of both sides, and I get this formula. So here, the standard deviation, when n is 20, the standard deviation of the sampling distribution of the sample mean is gonna be one. Here, when n is 100, well, our variance, our variance here, when n is equal to 100, so our variance of the sampling mean of the sample distribution, or our variance of the mean, of the sample mean, we could say, is going to be equal to 20. This guy's variance divided by n. So it equals, n is 100, so it equals 1 5th. Now, this guy's standard deviation, or the standard deviation of the sampling distribution of the sample mean, or the standard error of the mean, is gonna be the square root of that. So one over the square root of five. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
Here, when n is 100, well, our variance, our variance here, when n is equal to 100, so our variance of the sampling mean of the sample distribution, or our variance of the mean, of the sample mean, we could say, is going to be equal to 20, this guy's variance, divided by n. So it equals, n is 100, so it equals 1/5. Now, this guy's standard deviation, or the standard deviation of the sampling distribution of the sample mean, or the standard error of the mean, is gonna be the square root of that. So one over the square root of five. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
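And the same two formulas for the n = 100 case just worked through:

```latex
\sigma_{\bar{x}}^{2} = \frac{\sigma^{2}}{n} = \frac{20}{100} = \frac{1}{5},
\qquad
\sigma_{\bar{x}} = \frac{1}{\sqrt{5}} \approx 0.447
```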
So one over the square root of five. And so, this guy's a little bit under one half the standard deviation, while this guy had a standard deviation of one. So you see, it's definitely thinner. Now, I know what you're saying. Well, Sal, you just gave a formula. I don't necessarily believe you. Well, let's see if we can prove it to ourselves using the simulation. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
Now, I know what you're saying. Well, Sal, you just gave a formula. I don't necessarily believe you. Well, let's see if we can prove it to ourselves using the simulation. So I'll, just for fun, let me make a, I'll just mess with this distribution a little bit. So that's my new distribution. And let me take an n of, let me take two things that's easy to take the square root of, because if we're looking at standard deviation. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
Well, let's see if we can prove it to ourselves using the simulation. So I'll, just for fun, let me make a, I'll just mess with this distribution a little bit. So that's my new distribution. And let me take an n of, let me take two things that's easy to take the square root of, because if we're looking at standard deviation. So let's take, we'll take an n of 16, and an n of 25. And let's, well, I'll do a, let's do 10,000 trials. So in this case, every one of the trials, we're gonna take 16 samples from here, average them, plot it here, and then do a frequency plot. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
And let me take an n of, let me take two things that's easy to take the square root of, because if we're looking at standard deviation. So let's take, we'll take an n of 16, and an n of 25. And let's, well, I'll do a, let's do 10,000 trials. So in this case, every one of the trials, we're gonna take 16 samples from here, average them, plot it here, and then do a frequency plot. Here, we're gonna do 25 at a time, and then average them. I'll do it once animated, just to remember. So I'm taking 16 samples, plot it there. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
So in this case, every one of the trials, we're gonna take 16 samples from here, average them, plot it here, and then do a frequency plot. Here, we're gonna do 25 at a time, and then average them. I'll do it once animated, just to remember. So I'm taking 16 samples, plot it there. I take 16 samples, as described by this probability density function, or 25 now, plot it down here. Now, if I do that 10,000 times, what do I get? What do I get? | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
So I'm taking 16 samples, plot it there. I take 16 samples, as described by this probability density function, or 25 now, plot it down here. Now, if I do that 10,000 times, what do I get? What do I get? All right, so here, you know, just visually, you can tell just when n was larger, the standard deviation here is smaller. This is more squeezed together. But actually, let's write, let's write this stuff down. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
What do I get? All right, so here, you know, just visually, you can tell just when n was larger, the standard deviation here is smaller. This is more squeezed together. But actually, let's write, let's write this stuff down. Let's see if I can remember it here. Here, n is, so in this random distribution I made, my standard deviation was 9.3. I'm gonna remember these. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
But actually, let's write, let's write this stuff down. Let's see if I can remember it here. Here, n is, so in this random distribution I made, my standard deviation was 9.3. I'm gonna remember these. Our standard deviation for the original thing was 9.3. And so, standard deviation here was 2.3, and the standard deviation here is 1.87. Let's see if it, if it conforms to our formula. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
I'm gonna remember these. Our standard deviation for the original thing was 9.3. And so, standard deviation here was 2.3, and the standard deviation here is 1.87. Let's see if it, if it conforms to our formula. So I'm gonna take this offscreen for a second, and I'm gonna go back and do some mathematics. So I have this on my other screen, so I can remember those numbers. So in the trial we just did, my wacky distribution had a standard deviation of 9.3. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
Let's see if it, if it conforms to our formula. So I'm gonna take this offscreen for a second, and I'm gonna go back and do some mathematics. So I have this on my other screen, so I can remember those numbers. So in the trial we just did, my wacky distribution had a standard deviation of 9.3. When n is equal to, let me do this in another color. When n was equal to 16, just doing the experiment, doing a bunch of trials, and averaging, and doing all the things, we got the standard deviation of the sampling distribution of the sample mean, or the standard error of the mean. We experimentally determined it to be 2.33. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
So in the trial we just did, my wacky distribution had a standard deviation of 9.3. When n is equal to, let me do this in another color. When n was equal to 16, just doing the experiment, doing a bunch of trials, and averaging, and doing all the things, we got the standard deviation of the sampling distribution of the sample mean, or the standard error of the mean. We experimentally determined it to be 2.33. And then when n is equal to 25, when n is equal to 25, we got the standard error of the mean being equal to 1.87. Let's see if it conforms to our formulas. So we know that the variance, or we could almost say the variance of the mean, or the standard error, well, you know, the variance of the sampling distribution of the sample mean is equal to the variance of our original distribution divided by n. Take the square roots of both sides, and then you get standard error of the mean is equal to standard deviation of your original distribution divided by the square root of n. So let's see if this works out for these two things. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
We experimentally determined it to be 2.33. And then when n is equal to 25, when n is equal to 25, we got the standard error of the mean being equal to 1.87. Let's see if it conforms to our formulas. So we know that the variance, or we could almost say the variance of the mean, or the standard error, well, you know, the variance of the sampling distribution of the sample mean is equal to the variance of our original distribution divided by n. Take the square roots of both sides, and then you get standard error of the mean is equal to standard deviation of your original distribution divided by the square root of n. So let's see if this works out for these two things. So if I were to take 9.3, so let me do this case. So 9.3 divided by the square root of 16, right? N is 16. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
So we know that the variance, or we could almost say the variance of the mean, or the standard error, well, you know, the variance of the sampling distribution of the sample mean is equal to the variance of our original distribution divided by n. Take the square roots of both sides, and then you get standard error of the mean is equal to standard deviation of your original distribution divided by the square root of n. So let's see if this works out for these two things. So if I were to take 9.3, so let me do this case. So 9.3 divided by the square root of 16, right? N is 16. So divided by the square root of 16, which is four, what do I get? So 9.3 divided by four. Let me get a little calculator out here. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
N is 16. So divided by the square root of 16, which is four, what do I get? So 9.3 divided by four. Let me get a little calculator out here. Let's see, we have, let me clear it out. We wanted to divide 9.3 divided by four. 9.3 divided by our square root of n, n was 16, so divided by four is equal to 2.32. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
Let me get a little calculator out here. Let's see, we have, let me clear it out. We wanted to divide 9.3 divided by four. 9.3 divided by our square root of n, n was 16, so divided by four is equal to 2.32. 2.32. So this is equal to, this is equal to 2.32, which is pretty darn close to 2.33. This was after 10,000 trials. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
9.3 divided by our square root of n, n was 16, so divided by four is equal to 2.32. 2.32. So this is equal to, this is equal to 2.32, which is pretty darn close to 2.33. This was after 10,000 trials. Maybe right after this, I'll see what happens if we did 20,000 or 30,000 trials where we take samples of 16 and average them. Now let's look at this. Here, we would take 9.3. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
This was after 10,000 trials. Maybe right after this, I'll see what happens if we did 20,000 or 30,000 trials where we take samples of 16 and average them. Now let's look at this. Here, we would take 9.3. So let me draw a little line here. Maybe scroll over, that might be better. So we take our standard deviation of our original distribution. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
Here, we would take 9.3. So let me draw a little line here. Maybe scroll over, that might be better. So we take our standard deviation of our original distribution. So just that formula that we derived right here would tell us that our standard error should be equal to the standard deviation of our original distribution, 9.3, divided by the square root of n, divided by the square root of 25, right? Four was just the square root of 16. So this is equal to 9.3 divided by five, and let's see if it's 1.87. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
So we take our standard deviation of our original distribution. So just that formula that we derived right here would tell us that our standard error should be equal to the standard deviation of our original distribution, 9.3, divided by the square root of n, divided by the square root of 25, right? Four was just the square root of 16. So this is equal to 9.3 divided by five, and let's see if it's 1.87. So let me get my calculator back. So if I take 9.3 divided by five, what do I get? 1.86, which is very close to 1.87. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
So this is equal to 9.3 divided by five, and let's see if it's 1.87. So let me get my calculator back. So if I take 9.3 divided by five, what do I get? 1.86, which is very close to 1.87. So we got, we got in this case, 1.86. 1.86. So as you can see, what we got experimentally was almost exactly, and this was after 10,000 trials, of what you would expect. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
1.86, which is very close to 1.87. So we got, we got in this case, 1.86. 1.86. So as you can see, what we got experimentally was almost exactly, and this was after 10,000 trials, of what you would expect. Let's do another 10,000. So you got another 10,000 trials. Well, we're still in the ballpark. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3 |
So as you can see, what we got experimentally was almost exactly, and this was after 10,000 trials, of what you would expect. Let's do another 10,000. So you got another 10,000 trials. Well, we're still in the ballpark. We're not gonna, maybe I can't hope to get the exact number, you know, rounded or whatever. But as you can see, hopefully that'll be pretty satisfying to you, that the variance of the sampling distribution of the sample mean is just going to be equal to the variance of your original distribution, no matter how wacky that distribution might be, divided by your sample size. By the number of samples you take for every basket that you average, I guess, is the best way to think about it. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3
Well, we're still in the ballpark. We're not gonna, maybe I can't hope to get the exact number, you know, rounded or whatever. But as you can see, hopefully that'll be pretty satisfying to you, that the variance of the sampling distribution of the sample mean is just going to be equal to the variance of your original distribution, no matter how wacky that distribution might be, divided by your sample size. By the number of samples you take for every basket that you average, I guess, is the best way to think about it. And you know, sometimes this can get confusing because you are taking samples of averages based on samples. So when someone says sample size, you're like, is sample size the number of times I took averages or the number of things I'm taking averages of each time? And you know, it doesn't hurt to clarify that. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3
By the number of samples you take for every basket that you average, I guess, is the best way to think about it. And you know, sometimes this can get confusing because you are taking samples of averages based on samples. So when someone says sample size, you're like, is sample size the number of times I took averages or the number of things I'm taking averages of each time? And you know, it doesn't hurt to clarify that. Normally when they talk about sample size, they're talking about n. And at least in my head, I think of the trials like this: you take a sample size of 16, you average it, that's one trial and you plot it. Then you do it again and you do another trial and you do it over and over again. But anyway, hopefully this makes everything clear and then you now also understand how to get to the standard error of the mean. | Standard error of the mean Inferential statistics Probability and Statistics Khan Academy.mp3
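The narration is describing a simulation along these lines: pick a deliberately non-normal population, repeatedly draw samples of size n, average each sample, and compare the standard deviation of those sample means with σ/√n. This is a minimal sketch of that idea, not the applet used in the video; the exponential-shaped population and the NumPy details are my own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# A deliberately skewed ("wacky") population -- the claim is that its shape doesn't matter.
population = rng.exponential(scale=9.3, size=1_000_000)
sigma = population.std()                        # standard deviation of the original distribution

trials = 10_000
for n in (16, 25):
    # Each trial: draw n values, record their mean.
    idx = rng.integers(0, population.size, size=(trials, n))
    sample_means = population[idx].mean(axis=1)
    print(n, round(sample_means.std(), 2), round(sigma / np.sqrt(n), 2))
    # The two printed numbers should come out very close to each other.
```

Here "sample size" means n, the number of values averaged in each trial, while `trials` is how many such averages get plotted, which is exactly the distinction the narration flags.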
So it reads, Harry Potter is at Ollivander's Wand Shop. As we all know, the wand must choose the wizard. So Harry cannot make the choice himself. He interprets the wand selection as a random process so he can compare the probabilities of different outcomes. The wood types available are holly, elm, maple, and wenge, wenge, wenge? The core materials on offer are phoenix feather, unicorn hair, dragon scale, raven feather, and thestral tail. All right. | Describing subsets of sample spaces exercise Probability and Statistics Khan Academy.mp3 |
He interprets the wand selection as a random process so he can compare the probabilities of different outcomes. The wood types available are holly, elm, maple, and wenge, wenge, wenge? The core materials on offer are phoenix feather, unicorn hair, dragon scale, raven feather, and thestral tail. All right. Based on the sample space of possible outcomes listed below, what is more likely? And so we see here we have four different types of woods for the wand, and then each of those could be combined with five different types of core. Phoenix feather, unicorn hair, dragon scale, raven feather, and thestral tail. | Describing subsets of sample spaces exercise Probability and Statistics Khan Academy.mp3 |
All right. Based on the sample space of possible outcomes listed below, what is more likely? And so we see here we have four different types of woods for the wand, and then each of those could be combined with five different types of core. Phoenix feather, unicorn hair, dragon scale, raven feather, and thestral tail. And so that gives us four different woods, and each of those can be combined for five different cores, 20 possible outcomes. And they don't say it here, but the way they're talking, I guess we can, I'm gonna go with the assumption that they're equally likely outcomes, although it would have been nice if they said that these are all equally likely, but these are the 20 outcomes. And so which of these are more likely? | Describing subsets of sample spaces exercise Probability and Statistics Khan Academy.mp3 |
Phoenix feather, unicorn hair, dragon scale, raven feather, and thestral tail. And so that gives us four different woods, and each of those can be combined for five different cores, 20 possible outcomes. And they don't say it here, but the way they're talking, I guess we can, I'm gonna go with the assumption that they're equally likely outcomes, although it would have been nice if they said that these are all equally likely, but these are the 20 outcomes. And so which of these are more likely? The wand that selects Harry will be made of holly or unicorn hair. So how many of those outcomes involve this? So holly are these five outcomes, and if you said holly or unicorn hair, it's gonna be these five outcomes plus, well this one involves unicorn hair, but we've already included this one. | Describing subsets of sample spaces exercise Probability and Statistics Khan Academy.mp3 |
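The 4 × 5 = 20 outcomes can be enumerated directly; a small sketch using the wood and core names read out in the exercise (and the stated assumption that all outcomes are equally likely):

```python
from itertools import product

woods = ["holly", "elm", "maple", "wenge"]
cores = ["phoenix feather", "unicorn hair", "dragon scale", "raven feather", "thestral tail"]

sample_space = list(product(woods, cores))    # every (wood, core) pairing
print(len(sample_space))                      # 20
```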
And so which of these are more likely? The wand that selects Harry will be made of holly or unicorn hair. So how many of those outcomes involve this? So holly are these five outcomes, and if you said holly or unicorn hair, it's gonna be these five outcomes plus, well this one involves unicorn hair, but we've already included this one. But the other ones that's not included for the holly that involve unicorn hair are the elm unicorn, the maple unicorn, and the wenge unicorn. So it's these five plus these three right over here. So eight of these 20 outcomes. | Describing subsets of sample spaces exercise Probability and Statistics Khan Academy.mp3 |
So holly are these five outcomes, and if you said holly or unicorn hair, it's gonna be these five outcomes plus, well this one involves unicorn hair, but we've already included this one. But the other ones that's not included for the holly that involve unicorn hair are the elm unicorn, the maple unicorn, and the wenge unicorn. So it's these five plus these three right over here. So eight of these 20 outcomes. And if these are all equally likely outcomes, that means there's an 8/20 probability of a wand that will be made of holly or unicorn hair. So this is 8/20, so that's the same thing as 4/10, so 40% chance. Now the wand that selects Harry will be made of holly and unicorn hair. | Describing subsets of sample spaces exercise Probability and Statistics Khan Academy.mp3
So eight of these 20 outcomes. And if these are all equally likely outcomes, that means there's an 8/20 probability of a wand that will be made of holly or unicorn hair. So this is 8/20, so that's the same thing as 4/10, so 40% chance. Now the wand that selects Harry will be made of holly and unicorn hair. Well holly and unicorn hair, that's only one out of the 20 outcomes. So this of course is going to be a higher probability. It actually includes this outcome, and then seven other outcomes. | Describing subsets of sample spaces exercise Probability and Statistics Khan Academy.mp3
Now the wand that selects Harry will be made of holly and unicorn hair. Well holly and unicorn hair, that's only one out of the 20 outcomes. So this of course is going to be a higher probability. It actually includes this outcome, and then seven other outcomes. So this is, the first choice includes the outcome for the second choice, plus seven other outcomes. So this is definitely going to be a higher probability. Let's do a couple more of these, or at least one more of these. | Describing subsets of sample spaces exercise Probability and Statistics Khan Academy.mp3 |
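Counting the two events over that 20-outcome sample space reproduces the comparison above (again under the equal-likelihood assumption):

```python
from itertools import product

woods = ["holly", "elm", "maple", "wenge"]
cores = ["phoenix feather", "unicorn hair", "dragon scale", "raven feather", "thestral tail"]
sample_space = list(product(woods, cores))

or_event  = [o for o in sample_space if o[0] == "holly" or o[1] == "unicorn hair"]
and_event = [o for o in sample_space if o[0] == "holly" and o[1] == "unicorn hair"]

print(len(or_event), len(or_event) / len(sample_space))     # 8  0.4   -> 8/20 = 40%
print(len(and_event), len(and_event) / len(sample_space))   # 1  0.05  -> 1/20
```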
It actually includes this outcome, and then seven other outcomes. So this is, the first choice includes the outcome for the second choice, plus seven other outcomes. So this is definitely going to be a higher probability. Let's do a couple more of these, or at least one more of these. You and a friend are playing fire, water, sponge. I've never played that game. In this game, each of the two players chooses fire, water, or sponge. | Describing subsets of sample spaces exercise Probability and Statistics Khan Academy.mp3 |
Let's do a couple more of these, or at least one more of these. You and a friend are playing fire, water, sponge. I've never played that game. In this game, each of the two players chooses fire, water, or sponge. Both players reveal their choice at the same time, and the winner is determined based on the choices. I guess this is like rock, paper, scissors. Fire beats sponge by burning it. | Describing subsets of sample spaces exercise Probability and Statistics Khan Academy.mp3 |
In this game, each of the two players chooses fire, water, or sponge. Both players reveal their choice at the same time, and the winner is determined based on the choices. I guess this is like rock, paper, scissors. Fire beats sponge by burning it. Sponge beats water by soaking it up. And water beats fire by putting it out. All right, well, it kind of makes sense. | Describing subsets of sample spaces exercise Probability and Statistics Khan Academy.mp3 |
Fire beats sponge by burning it. Sponge beats water by soaking it up. And water beats fire by putting it out. All right, well, it kind of makes sense. If both players choose the same object, it is a tie. All the possible outcomes of the game are listed below. If we take outcomes one, three, four, five, seven, and eight as a subset of the sample space, which of the following statements, which of the statements below, describe this subset? | Describing subsets of sample spaces exercise Probability and Statistics Khan Academy.mp3 |
All right, well, it kind of makes sense. If both players choose the same object, it is a tie. All the possible outcomes of the game are listed below. If we take outcomes one, three, four, five, seven, and eight as a subset of the sample space, which of the following statements, which of the statements below, describe this subset? So let's look at the outcomes that they have over here. Well, it makes sense that there are nine possible outcomes, because for each of the three choices I can make, there's going to be three choices that my friend can make. So three times three is nine. | Describing subsets of sample spaces exercise Probability and Statistics Khan Academy.mp3 |
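The "three choices for me times three choices for my friend" count can be checked the same way (a sketch; the choice names come straight from the exercise):

```python
from itertools import product

choices = ["fire", "water", "sponge"]
outcomes = list(product(choices, choices))    # (my choice, friend's choice)
print(len(outcomes))                          # 9
```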
If we take outcomes one, three, four, five, seven, and eight as a subset of the sample space, which of the following statements, which of the statements below, describe this subset? So let's look at the outcomes that they have over here. Well, it makes sense that there are nine possible outcomes, because for each of the three choices I can make, there's going to be three choices that my friend can make. So three times three is nine. Let's see, they've highlighted these red outcomes, outcome one, three, four, five, seven, and eight. So let's see what's common about them. Outcome one, fire, I get fire, friend gets water. | Describing subsets of sample spaces exercise Probability and Statistics Khan Academy.mp3 |
So three times three is nine. Let's see, they've highlighted these red outcomes, outcome one, three, four, five, seven, and eight. So let's see what's common about them. Outcome one, fire, I get fire, friend gets water. OK, so let's see, my friend would win. Outcome three, I pick fire, my friend does sponge. So actually, I would win that one. | Describing subsets of sample spaces exercise Probability and Statistics Khan Academy.mp3 |
Outcome one, fire, I get fire, friend gets water. OK, so let's see, my friend would win. Outcome three, I pick fire, my friend does sponge. So actually, I would win that one. And then outcome four, water, fire. And then outcome five, water, sponge. Sponge, huh, these are all, I don't see a pattern just yet. | Describing subsets of sample spaces exercise Probability and Statistics Khan Academy.mp3 |
So actually, I would win that one. And then outcome four, water, fire. And then outcome five, water, sponge. Sponge, huh, these are all, I don't see a pattern just yet. Let's look at the choices. The subset consists of all outcomes where your friend does not win. All outcomes where your friend does not win. | Describing subsets of sample spaces exercise Probability and Statistics Khan Academy.mp3 |
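To test a description like "all outcomes where your friend does not win", it helps to classify every outcome using the stated rules: fire beats sponge, sponge beats water, water beats fire, and the same choice on both sides is a tie. The sketch below does that classification; the transcript doesn't fully spell out how the nine outcomes map onto the exercise's numbering 1 through 9, so no claim is made here about which numbered outcomes fall in the highlighted subset.

```python
from itertools import product

beats = {"fire": "sponge", "sponge": "water", "water": "fire"}   # key beats value

def result(me, friend):
    if me == friend:
        return "tie"
    return "I win" if beats[me] == friend else "friend wins"

for me, friend in product(beats, beats):      # all 9 (my choice, friend's choice) pairs
    print(f"{me:6} vs {friend:6} -> {result(me, friend)}")
```

From that classification you can count, for each candidate description, how many of the nine outcomes it covers and compare that against the six highlighted ones.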