idx | question | answer
55,801 | Calculate variance of $X^2$ when $X$ is a random variable with mean $0$ and variance $\sigma_x^2$ | If you have only the mean and variance of $X$ as 0 and $\sigma_x^2$, then there is insufficient information to calculate the variance of $X^2$, which is
$$
E[X^4] - E[X^2]^2 = E[X^4] - \sigma_x^4.
$$
For a normal R.V. $\sim N(0, \sigma_x^2)$, $E[X] = 0, E[X^2] = \sigma_x^2, E[X^4] = 3 \sigma_x^4$.
For a uniform R.V. $\sim U(-\sqrt{3}\,\sigma_x, \sqrt{3}\,\sigma_x)$ (the bounds $\pm\sqrt{3}\,\sigma_x \approx \pm 1.73\,\sigma_x$ are what give variance $\sigma_x^2$), $E[X] = 0, E[X^2] = \sigma_x^2, E[X^4] = 9 / 5 \,\sigma_x^4$.
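Plugging these fourth moments into the formula above makes the point concrete: the two distributions share the same mean and variance yet give different answers,
$$
\operatorname{Var}(X^2) = 3\sigma_x^4 - \sigma_x^4 = 2\sigma_x^4 \ \text{(normal)}, \qquad \operatorname{Var}(X^2) = \tfrac{9}{5}\sigma_x^4 - \sigma_x^4 = \tfrac{4}{5}\sigma_x^4 \ \text{(uniform)}.
$$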
It's easy to build distributions for which this is infinite.
55,802 | Calculating pooled p-values manually | This is for anyone who is interested, after reading pp. 37-43 in Flexible Imputation of Missing Data by Stef van Buuren. If we call the adjusted degrees of freedom nu
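# Assumed context (from earlier in this thread, not repeated here): 'mat' holds the per-imputation estimates,
# 'betweenVar'/'totVar' are the between- and total-imputation variances, 'nhimp' is the mids object from mice(),
# and 'pooledMean'/'pooledSE' are the pooled coefficient and its standard error.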
m <- nrow(mat)
lambda <- (betweenVar + (betweenVar/m))/totVar
n <- nrow(nhimp$data)
k <- length(coef(lm(chl~bmi,data = complete(nhimp,1))))
nu_old <- (m-1)/lambda^2
nu_com <- n-k
nu_obs <- (nu_com+1)/(nu_com+3)*nu_com*(1-lambda)
(nu_BR <- (nu_old*nu_obs)/(nu_old+nu_obs))
# [1] 15.68225
nu_BR, the Barnard-Rubin adjusted degrees of freedom, matches up with the degrees of freedom for the bmi variable yielded by the summary(pool(fit)) call above: 15.68225. So we can pass this value into the degrees-of-freedom argument of the pt() function in order to obtain the two-tailed p-value for the imputed model.
pt(q = pooledMean / pooledSE, df = nu_BR, lower.tail = FALSE) * 2
# [1] 0.2126945
And this manually calculated p-value now matches the p-value from the mice function output.
55,803 | Visualizing model fit for multidimensional data | I think a good approach in your case could be to
Fit the multivariate GP model on a few training points, as you do now
Take advantage of the fact you have the ground truth function in order to generate true values and predicted values for a range of inputs.
Plot comparisons of the "marginal" and "joint" outputs for these ranges of values.
Preparing 2-D inputs as a Matlab-style meshgrid:
import numpy as np

delta = 0.025
x = np.arange(-1, +1, delta)
y = np.arange(-1, +1, delta)
X, Y = np.meshgrid(x, y)
Generating predictions from the fitted GP model for all the combinations of 2-D X inputs, and then separating the 2-D outputs into individual arrays for later use:
test = np.stack([np.ravel(X), np.ravel(Y)], axis=1)
y_pred, sigma = gp.predict(test, return_std=True)
y_pred_fromX = y_pred[:,0].reshape(X.shape)
y_pred_fromY = y_pred[:,1].reshape(X.shape)
For the first dimension of the 2-D output, we plot the actual & predicted values as contours, with the axes representing the 2-D inputs:
import matplotlib.pyplot as plt
plt.figure(figsize=(10,5))
plt.subplot(121)
plt.contour(X, Y, np.sin(X), 20)
plt.title('1st dim: True')
plt.subplot(122)
plt.contour(X, Y, y_pred_fromX, 20)
plt.title('1st dim: Predicted')
Same for the second dimension of the 2-D output:
plt.figure(figsize=(10,5))
plt.subplot(121)
plt.contour(X, Y, np.sin(Y), 20)
plt.title('2nd dim: True')
plt.subplot(122)
plt.contour(X, Y, y_pred_fromY, 20)
plt.title('2nd dim: Predicted')
Focussing on the 2-D output alone, scatterplots of joint occurrences are not particularly helpful. Here the axes are the 2-D output values:
plt.figure(figsize=(10,5))
plt.subplot(121)
plt.scatter(np.sin(X), np.sin(Y))
plt.title('True: scatterplot')
plt.subplot(122)
plt.scatter(y_pred_fromX, y_pred_fromY)
plt.title('Predicted: scatterplot')
But Seaborn's jointplots are much more useful. Once again, axes are 2-D output values, and the plot represents a calculated density:
import seaborn as sns
plt.figure()
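# note: depending on the seaborn version, jointplot may require 1-D inputs, in which case flatten first (e.g. np.sin(X).ravel())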
sns.jointplot(x=np.sin(X), y=np.sin(Y), kind='kde')
plt.title('True: jointplot')
plt.figure()
sns.jointplot(x=y_pred_fromX, y=y_pred_fromY, kind='kde')
plt.title('Predicted: jointplot')
55,804 | How to un-transform exponential plot data to get back to original data scale? | You are going about this modelling exercise from the wrong direction. You are transforming x, which is causing, among other things, trouble with big values. Instead you could transform y.
Anyway, the reason your attempts are failing is that you are trying to apply the back transformation to the fitted values, but you transformed the x variable before fitting the model.
In this instance you don't need to do anything to the fitted values. And if we plot y against exp(x/1000) you'll also see that the transformation failed to do anything of interest
err <- 0.5*rnorm(101)
x <- seq(from=500, to=1000, by = 5)
y <- exp(.005*x) + err
mydata <- data.frame(x,y, expx = exp(x / 1000))
library(ggplot2)
theme_set(theme_bw())
ggplot(mydata, aes(x = expx, y = y)) + geom_point()
So all your transformation achieved is a rescaling of x — the relationship wasn't linearised at all. If you proceed, you'll just fit a straight line to the non-linear relationship. Let's do that, because it shows that you don't need to fiddle with x at all if you fit the model as you did:
myfit1 <- lm(y ~ exp(x/1000), data = mydata)
newd <- data.frame(x = seq(500, 1000, by = 1))
newd <- transform(newd, Fitted = predict(myfit1, newd),
expx = exp(x / 1000))
ggplot() +
geom_point(aes(x = x, y = y), mydata) +
geom_line(aes(x = x, y = Fitted), newd, size = 1)
The plot is the same, except for the labelling on the x-axis, if we plot on the exp(x/1000) scale
ggplot() +
geom_point(aes(x = expx, y = y), mydata) +
geom_line(aes(x = expx, y = Fitted), newd, size = 1)
What you can do instead is transform y to linearise the relationship
myfit2 <- lm(log(y) ~ x, data = mydata)
newd <- transform(newd, Fitted2 = exp(predict(myfit2, newd)))
ggplot() +
geom_point(aes(x = x, y = y), mydata) +
geom_line(aes(x = x, y = Fitted), newd, size = 1) +
geom_line(aes(x = x, y = Fitted2), newd, size = 1, colour = "red")
This now does a much better job of fitting the data.
The basic point here is that if you transform x you don't need to transform y.
Finally, following Mosteller and Tukey's bulging rule, for a relationship like the one seen in your data you could transform y via a sqrt or log transform, or transform x by squaring or cubing it, say. So by that rule of thumb you weren't choosing a useful transformation. In this case, we can roughly linearise the relationship by applying the following transformation
$$x^{\prime} = (x/1000)^5$$
(the division by 1000 is there just to avoid very large values of x). A plot of y against the thusly transformed x is shown below along with the regression fit
myfit3 <- lm(y ~ I((x/1000)^5), data = mydata)
newd <- transform(newd, Fitted3 = predict(myfit3, newd))
ggplot() +
geom_point(aes(x = x, y = y), mydata) +
geom_line(aes(x = x, y = Fitted3), newd, size = 1, col = "red")
What transformation you choose should however be informed by the system you are studying. The log transform of y works better here because that is how the data were generated.
55,805 | Proof that $E(|X_1 - X_2|)$ is bound by twice the mean | As pointed out in the comments by @zhanxiong, the triangle inequality is sufficient here: take
$|X_1 -X_2| \leq |X_1| +|X_2|$ and take expectations to get
$\mathbb{E}(|X_1 -X_2|) \leq \mathbb{E}(|X_1|) +\mathbb{E}(|X_2|)$. However, you cannot treat the two marginal expectations as equal without assuming the variables have the same marginal distributions. Assuming this, you can conclude $\mathbb{E}(|X_1 -X_2|) \leq 2\mathbb{E}(|X_1|)$, and the non-negativity assumption allows you to say that $\mathbb{E}(|X_1|) = \mathbb{E}(X_1)$. So that if $\mu=\mathbb{E}(X_1)$ then you have:
$\mathbb{E}(|X_1 -X_2|) \leq 2\mu$.
As mentioned by @zhanxiong in the comments this doesn't use independence but does use that the marginals have the same distributions.
As @whuber points out in a comment, this is not possible if the variables could be negative. Here is a simple example showing what happens when the expectations are negative; let $X_i \sim \text{Rademacher}(p=1/3)$ where $\Pr(X_i=1)=1/3$ and $\Pr(X_i=-1)=2/3$. The expectation of each is $\mathbb{E}(X_i) =-1/3$. Then with the i.i.d. assumption we can calculate the four probabilities:
$\Pr(X_1=1, X_2=1) = 1/9$ and $|1-1| = 0$
$\Pr(X_1=1, X_2=-1) = 2/9$ and $|1-(-1)| =2$
$\Pr(X_1=-1, X_2=1) = 2/9$ and $|-1-1| =2$
$\Pr(X_1=-1, X_2=-1) = 4/9$ and $|-1-(-1)|=0$
Multiplying the probabilities of each event (l.h.s.) with the values on the r.h.s. and summing all four numbers gives you $\mathbb{E}(|X_1 -X_2|)=8/9$, while twice the expectation is $2\mathbb{E}(X_i) = -2/3$. Clearly, $8/9 \nleq -2/3$, so the claim needs additional conditions/assumptions (non-negativity) to be true.
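To double-check the arithmetic, here is a short simulation sketch in R (the Rademacher draws are coded directly with sample(); none of this comes from the original answer):
set.seed(1)
n  <- 1e6
x1 <- sample(c(1, -1), n, replace = TRUE, prob = c(1/3, 2/3))
x2 <- sample(c(1, -1), n, replace = TRUE, prob = c(1/3, 2/3))
mean(abs(x1 - x2))   # should be close to 8/9 = 0.889
2 * mean(x1)         # should be close to -2/3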
55,806 | Calculate standard deviation from sample size, mean, and confidence interval? | The standard deviation for percentage/proportion is:
\begin{align}
\sigma &= \sqrt{p(1-p)} \\[5pt]
&= \sqrt{0.642(1-0.642)} \\[5pt]
&= 0.4794
\end{align}
Thus when given a percentage, you can directly find the std deviation.
For back-calculation, we know $CI = p \pm z \frac{\sigma}{\sqrt{N}}$
For 95%, $z = 1.96$, N = 427, $p=0.642$
$\sigma = ?$
Thus use the above formula and back substitute.
If your sample size is less than 30 (N<30), you have to use a t-value instead of a Z-value (t-value calculator). The t-value has degrees of freedom $df = N-1$, and the relevant upper-tail probability is $(1-CL)/2$, where CL is the confidence level (e.g. 0.025 for a 95% interval).
Thus the formula is: $CI = p \pm t_{(N-1)} \frac{\sigma}{\sqrt{N}}$
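As a concrete sketch of the back-substitution in R (the interval limits ci_lower and ci_upper below are hypothetical placeholders, since only the point estimate is quoted in the question):
N        <- 427
p        <- 0.642
z        <- qnorm(0.975)                                # 1.96 for a 95% interval
ci_lower <- 0.60                                        # hypothetical
ci_upper <- 0.68                                        # hypothetical
sigma    <- sqrt(N) * (ci_upper - ci_lower) / (2 * z)   # solve CI = p +/- z*sigma/sqrt(N) for sigma
sigma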
55,807 | Calculate standard deviation from sample size, mean, and confidence interval? | From the description you provided, your first question is about the distribution of people's age. A normal (i.e. Gaussian) distribution applies to this kind of application.
It will be helpful if you know how the confidence interval (CI) was calculated, because there are many different ways a CI can be calculated. For instance, if the distribution is normal and the CI was calculated using a t-test, then the SD can be estimated with the following equation:
SD = sqrt(n)*(ci_upper - ci_lower)/(2 * tinv((1+CL)/2; n-1)),
where CL is the confidence level, 'ci_upper' and 'ci_lower' are the upper and lower limits of the CI respectively, and 'tinv()' is the inverse of Student's T cdf (the upper-tail quantile $(1+CL)/2$ is used so that the result is positive).
Otherwise, if the distribution is normal but a known SD was used in calculating the CI, then the SD can be recovered with the following equation:
SD = sqrt(n)*(ci_upper - ci_lower)/(sqrt(8) * erfinv(CL)),
where 'erfinv()' is the inverse error function.
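For reference, here is a hedged R transcription of the two formulas above; qt() and qnorm((1+CL)/2) stand in for tinv() and sqrt(2)*erfinv(CL), and the interval limits are hypothetical placeholders:
n        <- 427
CL       <- 0.95
ci_lower <- 40.1                                                                 # hypothetical
ci_upper <- 43.9                                                                 # hypothetical
sd_t <- sqrt(n) * (ci_upper - ci_lower) / (2 * qt((1 + CL) / 2, df = n - 1))     # t-based CI
sd_z <- sqrt(n) * (ci_upper - ci_lower) / (2 * qnorm((1 + CL) / 2))              # z-based (known-SD) CI
c(sd_t, sd_z)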
Your second question is about the distribution of people's sex (i.e. male or female). From the data you provided, it sounds like there are k=274 males among the n=427 samples. The Bernoulli distribution applies to this application. In this case, the variance (of the male indicator) = p*(1-p) = 0.2299, and SD = sqrt(0.2299) = 0.4795, where p is the mean value. Note that "variance = mean*(1-mean)" is applicable to the Bernoulli distribution only.
55,808 | Calculate standard deviation from sample size, mean, and confidence interval? | A bit late to the party, but I noticed that the second part of the question was not fully addressed - "can it be apply to percentage measure"?
Following the OP's comment, I am assuming that by "percentage measure" we are referring to some binary outcome (male/female, right-handed/left-handed, etc.).
In that case the variables are described by a discrete probability distribution, whereas the age is a continuous variable and is described by a continuous probability distribution. A common choice for the distribution of binary variables is the binomial distribution. Confidence intervals for the binomial can be constructed in different ways (wiki). The original study should have described how they derived those confidence intervals.
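As an illustration, two common constructions in R, using the 274-of-427 counts quoted in the previous answer (a sketch added here for concreteness, not part of the original answer):
binom.test(274, 427)$conf.int   # exact (Clopper-Pearson) interval
prop.test(274, 427)$conf.int    # Wilson score interval with continuity correction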
Note that you can still use the formula provided by user3808268 to get the "standard deviation", but it would be difficult to meaningfully interpret it.
55,809 | How can't the Softmax layer never converge using hard targets | The wording "never converge" may sound a bit too strong, but the actual statement is
... the softmax can never predict a probability of exactly $0$ or exactly $1$, ...
This is certainly true in almost all cases. In this context, convergence means fitting the training data perfectly and outputting a one-hot vector of probabilities for all inputs $x$. In all other cases, there will be some loss, which basically means the algorithm hasn't converged yet. And this is exactly what happens in practice: usually, the learning stops either when the researcher sees no improvements in training or the time limit expires.
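For reference, the quoted claim follows directly from the form of the softmax: in exact arithmetic every exponential term is strictly positive, so each output lies strictly between 0 and 1,
$$
\operatorname{softmax}(z)_i = \frac{e^{z_i}}{\sum_j e^{z_j}} \in (0, 1).
$$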
By the way, the quote is taken from the chapter on Regularization, and there the authors explain that fitting the training data perfectly is a bad idea and injecting noise into the learning process actually improves generalization.
55,810 | How can't the Softmax layer never converge using hard targets | why don't we face that in practice?
The predictions for multiclass classification are done by taking the argmax over the probability vector, so this is not really an issue.
Do frameworks "terminate" the gradient descent algorithm earlier and handle this issue internally?
In deep learning you usually don't have any convergence guarantee, so most frameworks just expect you to specify a number of iterations or a tolerance (for example, you stop if the log-loss changes by less than $\epsilon$ between iterations).
There is also the problem of being too confident, but Maxim covered that. For a concrete example you can see this paper.
55,811 | How to deal when you have too many outliers? | These are not outliers. I am an economist and this is the way the data should look, based on your comments. It is a poor dataset to start a beginner on.
What you are looking at is called "price discrimination." In particular, it is third-degree price discrimination. Another real-world example, although it is an example of first-degree price discrimination, is the Apple iPhone. When it first came out they restricted production. As a consequence, the supply curve and the demand curve did not meet. Only those who valued it the most tried to buy it, and they were willing to pay the most. Then they produced more, but still not enough for the supply curve and the demand curve to meet. People stood in line and those willing to pay the most got a phone. They continued this process until the price fell to the equilibrium price.
In doing this, they extracted as much revenue as possible from each person. There is a hidden structure in this data that you need to extract. It probably had to do with square footage, amenities and location. You do need to go and ask a new question as this won't get you where you are looking to go. The data has no outliers in it.
Without really looking at it closely, it is probably a Pareto distribution, and not all Pareto distributions even have a mean, let alone the nice properties you want a beginner to see.
55,812 | Noncentral chi² with a noncentral chi² noncentrality parameter | Let's do it with characteristic functions. We'll start out with the definition of the characteristic function for an arbitrary distribution $F(x)$:
$$\phi_x(it) = \int e^{itx}dF(x)$$
The ch.f. of a noncentral $\chi^2(\nu, \Lambda)$ is:
$$\phi_{X|\Lambda}(it) = \frac{\exp\{\frac{it\Lambda}{1-2it}\}}{(1-2it)^{\nu/2}}$$
In this case, this is the ch.f. of $X|\Lambda$. Now, if we have an arbitrary distribution of $\Lambda$, label it $F$, we can find the ch.f. of $X$ by integrating out $\Lambda$ from the ch.f. of $X|\Lambda$. Note, however, that in this case the ch.f. of $X|\Lambda$ can be rearranged as:
$$\phi_{X|\Lambda}(it) = \frac{1}{(1-2it)^{\nu/2}}\exp\{[it/(1-2it)]\Lambda\}
$$
Looking carefully at the $\exp$ term and comparing to our initial expression for the characteristic function, we can see that integrating out $\Lambda$ is essentially the same integral as required to find the ch.f. of $F$, but with $it$ replaced by $it/(1-2it)$. Consequently, we can see that:
$$\phi_X(it) = \frac{\phi_{\Lambda}(it/(1-2it))}{(1-2it)^{\nu/2}}$$
Substituting the appropriately-parameterized ch.f. of $\Lambda$ gives us:
$$\phi_X(it) = \frac{\exp\left\{\frac{2it\theta}{(1-2it)\left(1-\frac{2it}{1-2it}\right)}\right\}}{(1-2it)^{\nu/2}(1-\frac{2it}{1-2it})^{\nu/2}}$$
We rewrite the term $(1-2it/(1-2it))$ as $(1-4it)/(1-2it)$ by replacing the leading "1" with $(1-2it)/(1-2it)$ and working through the resultant algebra. Substituting gives us the resulting mess:
$$\phi_X(it) = \frac{\exp\left\{\frac{2it\theta}{(1-2it)(1-4it)/(1-2it)}\right\}}{(1-2it)^{\nu/2}(1-4it)^{\nu/2}/(1-2it)^{\nu/2}}$$
The obvious cancellations result in:
$$\phi_X(it) = \frac{\exp\left\{\frac{2it\theta}{(1-4it)}\right\}}{(1-4it)^{\nu/2}}$$
Almost there! What we have is the characteristic function of $X$; what we want is the characteristic function of $X/2$. A basic property of characteristic functions is that the ch.f. of $aX$ is the same as the ch.f. of $X$ with $ait$ substituted everywhere for $it$. (If you look at the definition of the characteristic function at the beginning of the answer, you may be able to see why this is so.) Substituting $it/2$ everywhere for $it$ and performing the resulting divisions $(2/2)$ and $(4/2)$ gives us:
$$\phi_{X/2}(it) = \frac{\exp\left\{\frac{it\theta}{(1-2it)}\right\}}{(1-2it)^{\nu/2}}$$
which, by comparing to our initial characteristic function for $X|\Lambda$, we can see is the characteristic function of a noncentral $\chi^2(\nu, \theta)$ distribution.
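As a quick Monte Carlo sanity check of this result, here is a sketch in R. Reading the $\phi_\Lambda$ used above as the characteristic function of a noncentral $\chi^2(\nu, 2\theta)$ variable is my assumption about the setup, not something stated explicitly in this excerpt:
set.seed(1)
nu <- 4; theta <- 3; n <- 1e5
Lambda <- rchisq(n, df = nu, ncp = 2 * theta)   # assumed law of the noncentrality parameter
X      <- rchisq(n, df = nu, ncp = Lambda)      # X | Lambda ~ noncentral chi^2(nu, Lambda)
Y      <- rchisq(n, df = nu, ncp = theta)       # reference draws from chi^2(nu, theta)
c(mean(X / 2), mean(Y))                         # both should be close to nu + theta = 7
qqplot(X / 2, Y); abline(0, 1)                  # points should lie near the identity line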
$$\phi_x(it) = \int e^{itx}dF(x)$$
The ch.f. of a n | Noncentral chi² with a noncentral chi² noncentrality parameter
Let's do it with characteristic functions. We'll start out with the definition of the characteristic function for an arbitrary distribution $F(x)$:
$$\phi_x(it) = \int e^{itx}dF(x)$$
The ch.f. of a noncentral $\chi^2(\nu, \Lambda)$ is:
$$\phi_{X|\Lambda}(it) = \frac{\exp\{\frac{it\Lambda}{1-2it}\}}{(1-2it)^{\nu/2}}$$
In this case, this is the ch.f. of $X|\Lambda$. Now, if we have an arbitrary distribution of $\Lambda$, label it $F$, we can find the ch.f. of $X$ by integrating out $\Lambda$ from the ch.f. of $X|\Lambda$. Note, however, that in this case the ch.f. of $X|\Lambda$ can be rearranged as:
$$\phi_{X|\Lambda}(it) = \frac{1}{(1-2it)^{\nu/2}}\exp\{[it/(1-2it)]\Lambda\}
$$
Looking carefully at the $\exp$ term and comparing to our initial expression for the characteristic function, we can see that integrating out $\Lambda$ is essentially the same integral as required to find the ch.f. of $F$, but with $it$ replaced by $it/(1-2it)$. Consequently, we can see that:
$$\phi_X(it) = \frac{\phi_{\Lambda}(it/(1-2it))}{(1-2it)^{\nu/2}}$$
Substituting the appropriately-parameterized ch.f. of $\Lambda$ gives us:
$$\phi_X(it) = \frac{\exp\left\{\frac{2it\theta}{(1-2it)\left(1-\frac{2it}{1-2it}\right)}\right\}}{(1-2it)^{\nu/2}(1-\frac{2it}{1-2it})^{\nu/2}}$$
We rewrite the term $(1-2it/(1-2it))$ as $(1-4it)/(1-2it)$ by replacing the leading "1" with $(1-2it)/(1-2it)$ and working through the resultant algebra. Substituting gives us the resulting mess:
$$\phi_X(it) = \frac{\exp\left\{\frac{2it\theta}{(1-2it)(1-4it)/(1-2it)}\right\}}{(1-2it)^{\nu/2}(1-4it)^{\nu/2}/(1-2it)^{\nu/2}}$$
The obvious cancellations result in:
$$\phi_X(it) = \frac{\exp\left\{\frac{2it\theta}{(1-4it)}\right\}}{(1-4it)^{\nu/2}}$$
Almost there! What we have is the characteristic function of $X$; what we want is the characteristic function of $X/2$. A basic property of characteristic functions is that the ch.f. of $aX$ is the same as the ch.f. of $X$ with $ait$ substituted everywhere for $it$. (If you look at the definition of the characteristic function at the beginning of the answer, you may be able to see why this is so.) Substituting $it/2$ everywhere for $it$ and performing the resulting divisions $(2/2)$ and $(4/2)$ gives us:
$$\phi_{X/2}(it) = \frac{\exp\left\{\frac{it\theta}{(1-2it)}\right\}}{(1-2it)^{\nu/2}}$$
which, by comparing to our initial characteristic function for $X|\Lambda$, we can see is the characteristic function of a noncentral $\chi^2(\nu, \theta)$ distribution. | Noncentral chi² with a noncentral chi² noncentrality parameter
Let's do it with characteristic functions. We'll start out with the definition of the characteristic function for an arbitrary distribution $F(x)$:
$$\phi_x(it) = \int e^{itx}dF(x)$$
The ch.f. of a n |
55,813 | Does exponential family of distributions have finite expected value? | As fairly well-explained on Wikipedia, for any exponential family, there exists a parameterisation such that the density of the family is$$f(x|\theta)=\exp\{\theta\cdot T(x)-\Psi(\theta)\}$$wrt a constant measure $\text{d}\mu(x)$, where the components of $T(\cdot)$ are linearly independent. In this representation, the moment generating function of the random variable $T(X)$ is given by$$M(\upsilon)=\exp\{\Psi(\theta+\upsilon)-\Psi(\theta)\}$$and therefore$$\Psi(\theta+\upsilon)-\Psi(\theta)$$ is the cumulant generating function for $T(X)$. This implies that moments of $T(X)$ of all orders can be derived from the successive derivatives of $\Psi(\theta)$, provided $\theta$ is within the interior of the natural parameter space$$\Theta=\{\theta;\ |\Psi(\theta)|<\infty\}$$
However, if one is interested in the moments of $X$ itself, with density$$f(x|\theta)=\exp\{\theta\cdot T(x)-\Psi(\theta)\}$$there is no reason those moments are always well-defined. The generic reason is that the density of an arbitrary one-to-one transform $Y=\Xi(X)$ is then $$g(y|\theta)=\exp\{\theta\cdot (T\circ\Xi^{-1})(y)-\Psi(\theta)\}$$against the measure
$$\text{d}\xi(y) = \left| \frac{\text{d}\Xi^{-1}}{\text{d}y}\right|\text{d}\mu(\Xi^{-1}(y)).$$ Both $Y$ and $X$ thus share the same sufficient statistic in that $$(T\circ\Xi^{-1})(Y)=T(X)$$ as a random variable. The properties of this exponential family are thus characteristic of and characterised by the sufficient statistic, not $X$ or $Y$.
As exemplified by jbowman in a comment below, for the $X\sim\text{Ga}(\alpha,\beta)$ distribution, the intrinsic representation & sufficient statistic is $T(X)=\log X$. While $X$ has all (positive) moments finite, $Y=\exp\{X\}$ does not. And as pointed out by Glen_b, for the $X\sim\text{Pa}(\alpha,\beta)$ distribution [which is indeed an exponential family when the lower bound $\beta$ is fixed], the moments are only defined up to the $(\alpha-1)$th order.
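To make the Pareto case concrete, a standard calculation with the density $\alpha\beta^\alpha x^{-\alpha-1}$ on $x>\beta$ gives
$$\mathbb{E}[X^k]=\int_\beta^\infty x^k\,\alpha\beta^\alpha x^{-\alpha-1}\,\text{d}x=\frac{\alpha\beta^k}{\alpha-k}\ \text{ if } k<\alpha,\qquad \mathbb{E}[X^k]=\infty\ \text{ otherwise},$$
so even the mean fails to exist when $\alpha\le 1$.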
55,814 | Is there an equivalent to Fourier decomposition using normal distributions instead of sin / cos? | You might want to try out Gaussian Mixture models for your data.
For example, to decompose a mixture of $\mathcal{N}(10, 5)$ and $\mathcal{N}(22, 3)$ (parameterised as mean, sd), using the flexmix package:
library(flexmix)
set.seed(42)
m1 <- 10
m2 <- 22
sd1 <- 5
sd2 <- 3
N1 <- 1000
N2 <- 5000
D <- c(rnorm(mean = m1, sd = sd1, n = N1), rnorm(mean = m2, sd = sd2, n = N2))
kde <- density(D)
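# kernel density estimate of the pooled sample, used below for plotting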
mix1 <- FLXMRglm(family = "gaussian")
mix2 <- FLXMRglm(family = "gaussian")
fit <- flexmix(D ~ 1, data = as.data.frame(D), k = 2, model = list(mix1, mix2))
component1 <- parameters(fit, component=1)[[1]]
component2 <- parameters(fit, component=2)[[1]]
m1.estimated <- component1[1]
sd1.estimated <- component1[2]
m2.estimated <- component2[1]
sd2.estimated <- component2[2]
weights <- table(clusters(fit))
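# observations per cluster, used as mixing weights in the plots below (expect roughly a 1:5 split, in some order)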
plot(kde)
lines(kde$x, (weights[1]/sum(weights)*dnorm(mean = m1.estimated, sd =
sd1.estimated, x = kde$x)), col = "red", lwd = 2)
lines(kde$x, (weights[2]/sum(weights)*dnorm(mean = m2.estimated, sd =
sd2.estimated, x = kde$x)), col = "blue", lwd = 2)
55,815 | Is there an equivalent to Fourier decomposition using normal distributions instead of sin / cos? | There are several important differences between sinusoidal functions and normal distributions. Sinusoidal functions provide an orthogonal basis for "nice" functions (I won't go into defining "nice" rigorously). So not only can every "nice" function be written as a linear combination, but the coefficients can be calculated independently as well, because functions with different frequencies are orthogonal. Normal functions are not orthogonal (they are never negative, so their product is never negative, and the dot product, which integrates that product, is therefore strictly positive and can never be zero). Furthermore, if you vary only the sigmas, they can't possibly be a basis for any set that contains functions that are not symmetric about the mean. If you vary the mean and variance, then you have a two-dimensional space of functions to search over. This set will span the set of "nice" functions, but will in some sense not be linearly independent, and coefficients can't be obtained by simply taking the dot product of the function with Gaussian functions, as they can with sinusoidal functions. Thus, while there are packages that approximate a given function with Gaussians, they are more complicated than Fourier packages, are non-linear, and generally ask you to specify beforehand how many Gaussians you wish to use to approximate the given function. (Note that in rightskewed's answer, they tell the package that they're looking to decompose the function into two Gaussians with the parameter k=2.)
55,816 | How to perform a Binomial Test when having replicates? | Try fitting a binomial Generalized Linear Model - in R, if you have a data frame called DF with numbers of successes (called "irregular") and failures ("regular"), and a column for treatment/group called Treat, with one Petri dish in each row, you can then do
Mod <- glm(data = DF, cbind(irregular,regular) ~ Treat, family = "binomial")
summary(Mod) #This prints the results, p-values and statistics.
exp(confint(Mod)) #This gives you the CIs for the different terms in the model, on the odds-ratio scale
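For concreteness, a runnable sketch with made-up counts (three dishes per group; every number below is illustrative, not taken from the question):
DF <- data.frame(
  irregular = c(12, 15, 10, 30, 28, 33),   # hypothetical counts of irregular cells per dish
  regular   = c(88, 85, 90, 70, 72, 67),
  Treat     = rep(c("control", "treated"), each = 3)
)
Mod <- glm(cbind(irregular, regular) ~ Treat, family = "binomial", data = DF)
summary(Mod)
exp(confint(Mod))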
55,817 | How to perform a Binomial Test when having replicates? | There are many times in basic science (and other) research designs where experiments are replicated and, at first glance, repeated measures seem appropriate. However, most procedures designed to handle data derived from non-independent units, such as the paired t-test, require more than one observation taken on the same experimental unit. A replicated experiment, or one in which numerous observations are derived from the same organism or set of conditions, does not usually yield measurements on identical experimental units, although shared conditions do create an appropriate setting for evaluating clustered effects. While the presence of group or cluster effects could be considered, often these designs assume that observations (e.g. cells) taken across replications (e.g. dishes or mice) are sufficiently homogeneous to ignore potential clustering effects. Ignoring the potential for replicate-level effects, by lumping the numerous observations from different clusters into one group or treating the data as repeated measures, sets the stage for errors in estimating the true effects of exposure/treatment.
In the OP's case, the notion of using a logistic procedure to model a binary outcome is appropriate (a cell is either irregular or non-irregular). The idea of comparing proportions is also correct and the Chi-square test or Fisher's exact test are readily available for this purpose. This is not count data, as suggested later in the question.
If the Petri dishes are considered flawlessly homogeneous, then no further testing is necessary and the assessment of the treatment effect on cell morphology is complete. This approach would further be supported if, each time a replicate of the treated and untreated dishes was run, the replication was carried out in the same incubator (with exchange of positions within the incubator), with the same set of reagents, etc. This situation would not create repeated measures, but would create the best experimental design for isolating treatment effects, provided the control was truly a control. A less optimal design would replicate on separate days, in multiple incubators, with different lots of reagents, etc. Here, we cannot be sure that experimental conditions were homogeneous between dishes, and treatment effects may be lost as control of the experimental conditions is lost.
If the OP wanted to evaluate the effect of replication on experimental results, then careful consideration of the experimental approach should be undertaken (details not provided by the OP for comment) and some hypothesis as to whether replication affected the treatment effect should be generated and tested based on the experiment as executed.
As other answers suggest, the best approach is probably to proceed with a series of generalized linear models with a binomial family. To test the hypothesis that replication determines cell irregularity, one could test for the effect of replication as a dummy-coded variable using a model similar to the following (pseudo-code):
irregular_cell = factor(replication)
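A minimal runnable version of that pseudo-code in R (the data frame cells and its columns irregular, treatment and replicate are hypothetical names for cell-level data):
m_rep <- glm(irregular ~ factor(replicate), data = cells, family = binomial)
summary(m_rep)                 # does the replicate (dish) explain cell irregularity?
anova(m_rep, test = "Chisq")   # likelihood-ratio test for the replicate term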
If the number of groups is few and measurements within groups are many, this should suffice to test the hypothesis of replication effects on cell irregularity. If the groups are many and within-group measurements are few, then replication effects are best assessed with a Generalized Estimating Equation or a random-effects model. These models can further test interactions between cluster and treatment, and there are many CV references for this type of work.
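A sketch of the random-effects version in R, using the same hypothetical cells data frame (lme4 is a standard package, named here only for illustration):
library(lme4)
m_mix <- glmer(irregular ~ treatment + (1 | replicate), data = cells, family = binomial)
summary(m_mix)   # treatment effect with a random intercept for each dish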
Ultimately, the OP could find that replication explained a significant amount of variation and would then need to reconsider the experimental design or report treatment effects with main effects and/or standard errors adjusted for replication.
S. H. Hurlbert (1984) Pseudoreplication and the design of ecological field experiments, Ecological Monographs 54(2), pp. 187-211
55,818 | How to perform a Binomial Test when having replicates? | If I understand correctly, you have 2 experimental conditions. Within each condition, you have three Petri dishes and within each Petri dish, you have the cells which you're counting. Assuming I understand correctly, you need to account for the fact that you have clustering in your data (cells are nested within dish). I think you should be able to analyze your data using a mixed-effects logistic regression with experimental condition as a predictor, irregularity (0/1) as the outcome and dish as the cluster. This method should also allow you to compute CIs for your proportions (accounting for clustering).
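A sketch of that model in R, assuming a cell-level data frame cells with hypothetical columns irregular (0/1), condition and dish (lme4 provides glmer):
library(lme4)
fit <- glmer(irregular ~ condition + (1 | dish), data = cells, family = binomial)
summary(fit)
confint(fit, method = "Wald")   # quick approximate CIs on the log-odds scale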
55,819 | How to perform a Binomial Test when having replicates? | Problem Statement:
As I understand it you have 6 Petri dishes. You split them into 2 groups (A, B). Each group is identically treated, and for each dish you can compute $N_{irregular}/N_{total}$.
You then want to compare treatments.
So sample data might be:
$$
\begin{matrix}
\mathbf{Dish } & \mathbf{Group} & \mathbf{N_{irregular}} & \mathbf{N_{total}} \\
1 & \textrm{A} & 4 & 114 \\
2 & \textrm{A} & 20 & 100 \\
3 & \textrm{A} & 1 & 85 \\
4 & \textrm{B} & 17 & 108 \\
5 & \textrm{B} & 16 & 82 \\
6 & \textrm{B} & 10 & 89
\end{matrix}$$
So how do you compare these?
Answer:
Here are our numbers:
mydata <- data.frame(test  = c(1, 2, 3, 4, 5, 6),
                     group = c("a", "a", "a", "b", "b", "b"),
                     N_irr = c(4, 20, 1, 17, 16, 10),
                     N_tot = c(114, 100, 85, 108, 82, 89))
# data.frame() keeps the count columns numeric; as.data.frame(cbind(...)) would coerce everything to character.
mydata
The output is this:
> mydata
test group N_irr N_tot
1 1 a 4 114
2 2 a 20 100
3 3 a 1 85
4 4 b 17 108
5 5 b 16 82
6 6 b 10 89
So now our numbers are in the computer, and we can do things like use them to make plots, or to do other analyses.
I like to start with "Gross reality checks". If you ask a biologist they might tell you that the human visual cortex has been critical for survival for 3 million years, and its ability to quickly and effectively process data is exceptional. I like to hack into that, harness it, and use it for math.
Here we turn the numbers into a picture.
...now working
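Since the picture itself never made it into the answer, here is a minimal sketch of the kind of plot that is meant, using the mydata frame built above (the prop column is added here for illustration and assumes the counts are stored as numbers):
mydata$prop <- mydata$N_irr / mydata$N_tot     # proportion of irregular cells per dish
stripchart(prop ~ group, data = mydata, vertical = TRUE, pch = 19,
           xlab = "group", ylab = "proportion irregular")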
55,820 | Negative eigenvalues in principal component analysis in the presence of missing data | If you calculate pairwise correlation coefficients in the presence of missing values, your correlation matrix may end up being non-positive-definite. In fact it's a very common phenomenon in quantitative finance. One way to deal with this issue is the Ledoit-Wolf procedure, see here. They developed the method for a different issue, but it's used for the missing-value issue too. An author has MATLAB code here.
Suppose you have three variables x, y and z. In observation 1 the value of x is missing, but y and z are present. One way to calculate the correlation matrix is to skip observation 1.
Another, seemingly better way is to skip observation 1 only when calculating the pairwise correlations xy and xz, and to use it for the yz correlation. The values of y and z are available in observation 1, so why not use them? If you go this way, then, surprisingly, the obtained correlation matrix may not be a good estimate of the true correlation matrix. In particular, your matrix may not be positive definite. Again, this is a common situation in many finance applications such as portfolio optimization and PCA.
I would skip the observations with missing values IF the data size allows it. This is not always possible; e.g. sometimes we have hundreds of variables and about as many observations. If we skip an observation when at least one variable value is missing, easily half of the observations may be flagged. In this case, it's worth the trouble to do pairwise correlations using "all available" data and then shrink the matrix using the Ledoit-Wolf procedure. Otherwise, if it is just a couple of rows dropping out, I wouldn't bother and would simply skip them.
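A small R sketch of both points, the problem and the shrinkage fix (the shrinkage weight is fixed by hand here; the actual Ledoit-Wolf estimator chooses it from the data):
set.seed(1)
X <- matrix(rnorm(50 * 8), 50, 8)
X[sample(length(X), 150)] <- NA                # knock out a large share of the values
R <- cor(X, use = "pairwise.complete.obs")     # pairwise-complete correlations
min(eigen(R)$values)                           # may come out negative, i.e. R is not positive definite
w <- 0.2                                       # hand-picked shrinkage weight, for illustration only
R_shrunk <- (1 - w) * R + w * diag(ncol(R))    # shrink toward the identity
min(eigen(R_shrunk)$values)                    # larger; with enough shrinkage the matrix becomes positive definite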
55,821 | Negative eigenvalues in principal component analysis in the presence of missing data | After the discussion here, I can provide at least a partial answer.
Apparently, pairwise calculation of correlation coefficients, especially if the data matrix has missing data, leads to a correlation matrix that is only positive semi-definite, not positive definite. Eigenvalues that are too extreme. According to http://epublications.bond.edu.au/cgi/viewcontent.cgi?article=1099&context=ejsie it is possible to counteract this effect by "shrinking" the matrix. To this purpose, a weighted average of the correlation matrix $R$ with a form matrix $F$ is calculated: $\hat{r}_{ij} = (1-\omega) r_{ij} + \omega f_{ij}$ with $\omega$ a weight factor from [0..1] (in the literature, it is often called $\lambda$, but as this symbol is used for eigenvalues already I have re-christend it). According to https://cssanalytics.files.wordpress.com/2013/10/shrinkage-simpler-is-better.pdf it doesn't really matter which form matrix one uses, I have tried both the identity matrix $I$ and the average matrix $\bar{R}$ (for each variable $i$ calculate the average correlation with all other variables $\bar{r}_i$, then $\bar{r}_{ij} = (\bar{r}_i + \bar{r}_j)/2$). The results are indeed very similar.
What I still don't understand is why $R$ is so affected by missing data. I have done simulations with 10,000 data points from $y = 1 + 2x + \epsilon$, with $\epsilon$ a Gaussian random number so that $r = 0.825$. Then I set random elements of this matrix to NaN and recalculated $r$. For up to 20% missing data, the maximal deviation of $r$ was 0.01, and most of the data fell within ± 0.005. Even for 50% missing data the maximum deviation was 0.015, and for 70% it was 0.02. Surely this is not such a big effect?
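A sketch of that simulation in R (the noise scale is chosen to give a correlation near 0.825, and missing values are inserted independently in x and y):
set.seed(42)
n <- 10000
x <- rnorm(n)
y <- 1 + 2 * x + rnorm(n, sd = 1.37)
frac <- 0.2                                    # fraction of entries set to missing
xm <- x; ym <- y
xm[sample(n, frac * n)] <- NA
ym[sample(n, frac * n)] <- NA
cor(xm, ym, use = "pairwise.complete.obs") - cor(x, y)   # deviation caused by the missing data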
55,822 | Is there a Bayesian analogue to a simultaneous confidence band? | Given a prior distribution $\pi$ on a functional space and observations about the function values at some points, or noisy observations of the function itself, the posterior distribution $\pi(\cdot|\mathcal{D})$ can be used to derive an HPD region,
$$\left\{f\in\mathcal{F}\,,\ \pi(f |\mathcal{D})\ge k_\alpha\right\}$$
at least in principle since the derivation may prove too complex in a general situation.
For instance,
Breth, M. (1978) Bayesian confidence bands for estimating a function. Annals of Statistics Vol. 6, No. 3, pp. 649-657
seems to address this problem.
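In practice such a region is usually approximated from posterior draws of the curve. A minimal R sketch of one common Monte Carlo construction of a simultaneous band (not the HPD region itself), assuming a hypothetical S x G matrix draws of function values on a grid:
post_mean <- colMeans(draws)
post_sd   <- apply(draws, 2, sd)
dev   <- abs(sweep(sweep(draws, 2, post_mean), 2, post_sd, "/"))   # standardized deviations
k     <- quantile(apply(dev, 1, max), 0.95)                        # 95% of whole sampled curves stay inside
upper <- post_mean + k * post_sd
lower <- post_mean - k * post_sd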
55,823 | BIC vs. intuition | It's not an obvious question at all! In fact, I think there may be some disagreement even among statisticians.
My view is that you should never let the computer do your thinking for you. Don't blindly accept anything. However, don't blindly reject anything, either. My favorite professor in grad school, Herman Friedman, used to say "if you're not surprised, you haven't learned anything". Well, here you are surprised. Why?
One clear possibility from your description is collinearity. The three variables you mention (wealth, age, education) are related. When there is collinearity, weird things can happen.
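A quick collinearity check in R (model and variable names are hypothetical; vif() is from the car package):
fit <- lm(outcome ~ wealth + age + education, data = dat)
car::vif(fit)   # values well above about 5-10 point to strong collinearity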
If you add more details of the situation and the model, someone (either me or someone else) may be able to offer more insight.
55,824 | Comparing estimators of equal risk | The statement as reported is wrong: A most standard example is provided by the James-Stein estimator: given $X\sim\mathcal{N}_p(\theta,I_p)$ $(p>2)$, assuming $\theta$ is estimated under the square error loss,$$L(\theta,\delta)=||\theta-\delta||^2$$the estimators$$\delta_0(x)=x\qquad\text{and}\qquad \delta_{2(p-2)}(x)=\left(1-\frac{2(p-2)}{||x||^2}\right) x$$have exactly the same risk:
$$\mathbb{E}_\theta[||\theta-\delta_0(X)||^2]=\mathbb{E}_\theta[||\theta-\delta_{2(p-2)}(X)||^2]=p$$(The proof goes by Stein's technique of the unbiased estimator of the risk and can be found [e.g.] in my Bayesian Choice book.) Actually, when two estimators share the same risk functions, they are inadmissible under strictly convex losses in that the average of these two estimators is improving the risk:
$$R(\theta,\{\delta_1+\delta_2\}/2) < R(\theta,\delta_1)=R(\theta,\delta_2)$$which is what is happening with the James-Stein estimators:
$$R(\theta,\{\delta_0+\delta_{2(p-2)}\}/2) < R(\theta,\delta_0)=R(\theta,\delta_{2(p-2)})=p$$with$$\frac{\delta_0+\delta_{2(p-2)}}{2}=\delta_{(p-2)}$$
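A quick Monte Carlo check of the equal-risk claim in R (p = 5 and an arbitrary theta; both averages should be close to p = 5 up to simulation error):
p <- 5; theta <- c(3, -1, 0, 2, 1); S <- 1e5
X  <- matrix(rnorm(S * p), S, p) + matrix(theta, S, p, byrow = TRUE)
js <- X * (1 - 2 * (p - 2) / rowSums(X^2))     # the James-Stein estimator delta_{2(p-2)}
mean(rowSums(sweep(X,  2, theta)^2))           # risk of delta_0
mean(rowSums(sweep(js, 2, theta)^2))           # risk of delta_{2(p-2)}, essentially the same value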
I presume the confusion stemmed from the concept of completeness, where having a function with constant expectation under all values of the parameter implies that this function is constant. But, for loss functions, the concept does not apply since the loss depends both on the observation (that is complete in a Normal model) and on the parameter.
55,825 | Expectation of Sum of Gamma over Product of Inverse-Gamma | This is rather straightforward (when the $X_i$'s are independent):
\begin{align*}\mathbb{E}\left(\cfrac{\sum_{i=1}^n X_i}{(\prod_{j=1}^n X_j)^{1/n}}\right) &= \sum_{i=1}^n \mathbb{E}\left(\cfrac{ X_i}{(\prod_{j=1}^n X_j)^{1/n}}\right)\\
&= \sum_{i=1}^n \mathbb{E}[X_i^{1-1/n}]\times \mathbb{E}\left(\cfrac{1}{(\prod_{j\ne i} X_j)^{1/n}}\right)\\
&= \sum_{i=1}^n \mathbb{E}[X_i^{1-1/n}]\times \prod_{j\ne i}\mathbb{E}\left[X_j^{-1/n}\right]\\
&= n\mathbb{E}[X_1^{1-1/n}]\mathbb{E}\left[X_1^{-1/n}\right]^{n-1}\\
&= n\times \beta^{1/n-1}\dfrac{\Gamma(\alpha+1-1/n)}{\Gamma(\alpha)}\times\left[\beta^{1/n}\dfrac{\Gamma(\alpha-1/n)}{\Gamma(\alpha)}\right]^{n-1}\\
&=n\times \beta^{1/n-1+(n-1)/n}\times\dfrac{\Gamma(\alpha+1-1/n)\Gamma(\alpha-1/n)^{n-1}}{\Gamma(\alpha)^n}\\
&=n\times \dfrac{(\alpha-1/n)\Gamma(\alpha-1/n)^{n}}{\Gamma(\alpha)^n}\end{align*}
If considering the expectation of the ratio of the arithmetic mean to the geometric mean, i.e., when dividing the above by $n$, one obtains a ratio that converges to $1$ as $\alpha\to\infty$.
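A quick numerical check in R (shape alpha = 3, arbitrary rate, n = 5; the two numbers should agree up to Monte Carlo error):
a <- 3; n <- 5; S <- 1e5
X <- matrix(rgamma(S * n, shape = a, rate = 2), S, n)
mean(rowSums(X) / exp(rowMeans(log(X))))       # simulated expectation of sum over geometric mean
n * (a - 1/n) * gamma(a - 1/n)^n / gamma(a)^n  # closed form derived above (the rate cancels)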
55,826 | Simulation of a Poisson Process | You want to get a function of $t$ that gives the count of events. So simply do
n_func <- function(t, S) sapply(t, function(u) sum(S <= u))  # N(t): number of event times in S at or before t
t_series <- seq(0, max(S), by = max(S)/100)
plot(t_series, n_func(t_series, S), type = "s")  # missing ")" added; type = "s" draws the counting process as a step function
$S$ is basically the time stamps of the Poisson events in your sample, so you just count the number of events that are time-stamped at or before $t$.
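For a self-contained run, S can be generated as in the question's algorithm (the rate 0.5 is an arbitrary choice):
set.seed(1)
S <- cumsum(rexp(100, rate = 0.5))   # event times: cumulative sums of Exp(0.5) inter-arrival times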
55,827 | Simulation of a Poisson Process | The Poisson process is describing how events occur over time. The inter-arrival times (i.e., the times between each successive event) are distributed as an exponential. In the algorithm provided, $S_n$ is the total amount of time that has elapsed since you started recording until the $n$th event occurred. Since your $X_i$s are the times between each event, you sum them from the start up to the event of interest to get the total elapsed time until that event. The $N_t$s are the number of events that have occurred up to any given point in time. Thus, the plot would be a set of horizontal lines at successive integers, where the x-axis was time.
Here is a simple example, coded in R:
set.seed(3535) # this makes the example exactly reproducible
# there's no need for an explicit s[0]
X = rexp(n=100, rate=0.1) # step 2
S = cumsum(X) # step 3
N = 0:100 # step 4
windows()  # opens a plotting window on Windows; on other systems use x11()/quartz() or simply omit this line
plot(stepfun(x=S, y=N), main="Counting Process",
xlab="Time", ylab="Number of events")
55,828 | Policy Iteration Diagram in Jack’s Car Rental (in reinforcement learning) | The stepped curves are showing the contours of the different policy actions, as a map over the state space. They are a choice of visualisation of the policy, which has 441 states, and would not look quite so intuitive listed as a table.
The numbers are the number of cars that the policy decides to move from first location to second location.
You can look up the optimal action from the $\pi_4$ graph for a specific number of cars at each location by finding the grid point $(n_{2}, n_{1})$ for it (reading horizontal axis first) and seeing what the number is inside that area - move that number of cars from first to second location.
The final image shows the state value function of the optimal policy as a 3D surface with the base being the state and the height being the value.
When I did this exercise, I could not find how to get the labeled contours using matplotlib, so I made a colour map instead:
Hotter colours mean more cars moved from the first location to the second location; the map orientation is different from the one in the book.
55,829 | Intuitive reason why jointly normal and uncorrelated imply independence | Well, what intuition can there be? For a bivariate normal distribution (for $X$ and $Y$, say), uncorrelated means independence of $X$ and $Y$, while for the quite similar bivariate t distribution, with say 100 degrees of freedom, independence does not follow from zero correlation. Plotted, these two distributions will look quite similar. For both distributions all contours of the joint density function are elliptical.
The only intuition that I can see is algebraic, the joint density for the bivariate normal is a constant times an exponential function. The argument of the exponential function is a quadratic polynomial in $x,y$. When the correlation is zero, this polynomial will not include any cross terms $xy$, only pure quadratic terms $x^2, y^2$. Then the property of the exponential function that
$$
\exp(-x^2 -y^2) = \exp(-x^2)\cdot \exp(-y^2)
$$
kicks in (of course the actual terms will be more complicated, but that does not change the idea). If you try to do the same with the bivariate t distribution, everything is the same, except that the quadratic polynomial sits inside the argument of another function without that nice separation property of the exponential! That's the only intuition that I can see.
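A small numerical illustration in R of that difference (mvtnorm is assumed to be installed; a low number of degrees of freedom is used so the effect is easy to see):
library(mvtnorm)
Z <- rmvt(1e5, sigma = diag(2), df = 5)   # uncorrelated bivariate t
cor(Z[, 1], Z[, 2])                       # close to 0
cor(Z[, 1]^2, Z[, 2]^2)                   # clearly positive, so the components are not independent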
55,830 | Expected value of SRSWOR sample maximum | Just about all answers will have to be mathematically equivalent. The point of this one is to develop a solution in the laziest possible way: that is, by pure reasoning unaccompanied by any calculation at all.
There are $\binom{N}{n}$ possible and equally likely samples, since each sample is a subset of $n$ of the $N$ elements. (For those new to such notation, $\binom{N}{n}$ can be defined as the number of distinct samples of size $n$ without replacement from $N$ things: that is, the number of $n$-subsets of $N$ things. In this answer you will not need to know any formulas for these quantities.)
A sample with maximum value $k \ge n$ consists of the number $k$ together with a subset of $n-1$ of the remaining $k-1$ elements smaller than $k$. There are $\binom{k-1}{n-1}$ of these.
To obtain the expectation, by definition we must multiply the probability of each such sample, $1/\binom{N}{n},$ by the value of its maximum, $k,$ and add these up:
$$\mathbb{E}(\text{maximum}) = \frac{1}{\binom{N}{n}}\sum_{k=n}^N k\binom{k-1}{n-1}.$$
So much for the statistics. The rest is combinatorics. Our purpose is to obtain a succinct numerical formula for this rather abstract looking sum.
You can evaluate the sum by doing very little calculation indeed. One way begins by interpreting the term $k\binom{k-1}{n-1}$ as a count: it is the number of ways you can pick one of $k$ things and independently select $n-1$ of the remaining $k-1$ things. Equivalently, you could have selected all $n$ of those things (in $\binom{k}{n}$ ways) and then chosen one of those $n$ things as the "first" pick. Since there are $n$ such choices,
$$k\binom{k-1}{n-1} = n\binom{k}{n}.\tag{*}$$
Take the constant factor of $n$ out of the sum:
$$\mathbb{E}(\text{maximum}) = \frac{1}{\binom{N}{n}}\, n\, \sum_{k=n}^N \binom{k}{n}.$$
What could this sum count? Almost the same argument applies: associated with each $n+1$-subset of $N+1$ things are (1) its maximum, which I will call $k+1$, and (2) an $n$-subset of the remaining $k$ smaller things. The sum counts these by partitioning the possibilities for such subsets by the values of their maxima. Consequently, it counts all $n+1$-subsets of $N+1$ things and therefore equals $\binom{N+1}{n+1}$. Plugging this into the expectation (and not forgetting the constant factor of $n$) gives
$$\mathbb{E}(\text{maximum}) = \frac{1}{\binom{N}{n}} \left(n\binom{N+1}{n+1}\right).\tag{**}$$
The stuff in parentheses looks a lot like the counting result we obtained in $(*)$. We can make it so by multiplying by $n+1$ and dividing by the same value:
$$n\binom{N+1}{n+1} = \frac{n}{n+1} (n+1) \binom{N+1}{n+1} = \frac{n}{n+1}(N+1)\binom{N}{n}.$$
Plugging this into $(**)$ cancels the $\binom{N}{n}$ in the fraction (which is why we never needed a formula for it), leaving the simple result
$$\mathbb{E}(\text{maximum}) = \frac{n}{n+1}(N+1).$$
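A one-line simulation check in R (N = 20, n = 5, so the formula gives 5/6 * 21 = 17.5):
N <- 20; n <- 5
mean(replicate(1e5, max(sample.int(N, n))))   # close to 17.5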
55,831 | Expected value of SRSWOR sample maximum | If you are sampling from the discrete uniform population (without replacement, as you have stipulated), then it is the German Tank Problem.
Let the sample be $X_1, X_2, \ldots , X_n$ with $Y=\textrm{max} \left( X_1, X_2, \ldots , X_n \right).$
The joint mass function is $$f \left( x_1, x_2, \ldots , x_n \right)=\frac{1}{N \left( N-1 \right) \cdots \left( N-n+1 \right) }$$
The cdf of $Y$ is $$P[Y \le y]=P[X_1 \le y, X_2 \le y, \ldots , X_n \le y]=\frac{y \left( y-1 \right) \cdots \left( y-n+1 \right) }{N \left( N - 1 \right) \cdots \left( N-n+1 \right) } =\frac{{y \choose n}}{N \choose n}$$
Then the probability mass function will be $$g(y) = P[Y=y]=P[Y \le y] - P[Y \le y-1] = \frac{{y \choose n}-{y-1 \choose n}}{N \choose n}=\frac{{y-1} \choose {n-1}}{N \choose n} \ , \textrm{where} \ y \ge n $$
The expected value is then $$E[Y]=\sum_{y=n}^{N} y \ g(y)=\sum_{y=n}^N y \frac{{y-1} \choose {n-1}}{N \choose n}=\frac{\sum_{y=n}^N y {{y-1} \choose {n-1}}}{N \choose n}=\frac{n \sum_{y=n}^{N} {y \choose n}}{N \choose n}$$
The Hockey-stick identity is $$\sum_{y=n}^N {y \choose n} = {{N+1} \choose {n+1}}$$
So $$E[Y]= \frac{n {{N+1} \choose {n+1}}}{N \choose n}=\left( \frac{n}{n+1} \right) \left( N+1 \right)$$
This approach is given in Tenenbein (The Racing Car Problem, $\it{The\ American\ Statistician}$, February 1971).
55,832 | Expected value of SRSWOR sample maximum | As with each and every sampling problem, the answer is "Yes, if you have access to the population and to an algorithm that efficiently enumerates all possible samples". So if you are talking about samples of size 3 out of population of size 7, then yes, you can probably derive that. If you are talking about realistic samples of size 1000 from a country's population, then you can probably forget it.
The central methodological issue is that finite population inference is non-parametric. Inference with respect to the sampling design works for the population $\{1,2,3,4,5,6,7\}$ as well as for the population $\{1,1,1,1,1,1,1,10^6\}$. By contrast, the existing methods of dealing with extreme values rely on assuming a smooth tail of a continuous distribution, for which one of the three functional forms established by Gnedenko would be applicable, and an i.i.d. sample (which isn't quite the same as SRS).
You can approximately fake an i.i.d. argument if you have a good approximation for the tail of your population, and if you have a simple enough design (SRSWR or SRSWOR with a negligible sampling fraction).
55,833 | Variational Autoencoder and validation loss | The short answer is: Don't drop the KL term.
The reconstruction error plus KL term optimized by a VAE is a lower bound on the log-likelihood (also called the "evidence lower bound", or ELBO) [1]. Log-likelihood is one way to measure how well your model explains the data. If that's what you're after, it makes sense to try to evaluate the log-likelihood. This is not straight-forward, but possible [2].
You can use the ELBO as a conservative estimate of the log-likelihood. It therefore makes sense to use reconstruction error plus KL term as your validation_loss.
Ask yourself why you are training a variational autoencoder (VAE). If you can answer this question, the right way to evaluate (and train) your model will become much clearer. Is the reconstruction error important for your application? If yes, then reconstruction error would be okay to use for validation, but then I would question why you are optimizing for log-likelihood.
[1] Kingma & Welling, Auto-Encoding Variational Bayes, 2014 (Equation 2)
[2] Wu et al., On the Quantitative Analysis of Decoder Based Generative Models, 2017
55,834 | Variational Autoencoder and validation loss | If you ignore the regularization part (KL divergence) you will not be able to compare it with the training loss.
It is true that regularization is added to better optimize the parameters of the model and not for better approximation of the loss function.
You could add an 'mse' metric for this:
Model.compile(..., metrics=['mse'])  # Keras expects the keyword `metrics`, not `metric`
You can have a look at this Keras issues page for more detailed discussion.
55,835 | Where does the Kullback-Leibler come from? [duplicate] | KL divergence is very closely related to entropy. It helps to understand the motivation behind the expression for entropy.
For some distribution $p(x)$ we call $h(x) = -\log p(x)$ the information received by observing variable $x$. It's called "information" because it has the nice property that if we observe two completely independent events, $x$ and $y$, then the information gained is the sum of the information gained from each event. I.e. if $x$ and $y$ are independent, $p(x,y) = p(x) p(y),$ then $h(x,y) = -\log p(x) - \log p(y) = h(x) + h(y).$ Likewise, if $x$ and $y$ are completely mutually occurring, then it follows that observing $x$ and $y$ simply gives us the exact same information as observing $x$ alone. That is $p(x,y) = p(x) \implies h(x,y) = h(x).$ Also, this form of $h(x)$ ensures that the information gained is always positive, and more information is gained by observing events that are less likely.
To that end, entropy is the expected amount of information gained by observing a variable drawn from a distribution $p$. That is,
$$
H[p] = E[h_p(x)] = \int p(x) h_p(x) dx = - \int p(x) \log p(x) dx,
$$
where I'm calling $h_p(x)$ the information gained from observing $x$ if $x$ is drawn from distribution $p.$
The KL divergence is often used in situations where $p$ is some unknown true distribution, and $q$ is a proxy distribution that we're using to estimate $p.$ $KL(p \mid \mid q)$ is the expected difference in information received by observing $x$ if $q$ was the true distribution, vs if $p$ was the true distribution, and that expectation is taken over a single distribution. Another way of saying it is that it's the expected additional information we need to receive from observing $x$ in order to get all the information we need.
Suppose you couldn't draw from $p(x)$ but could only evaluate it for a given $x$ (or, at the very least, you could evaluate the ratio $p(x)/q(x)$). But you could draw from your proxy distribution $q(x),$ and you want to learn $p.$ Then the expected deficit of information you gain (or the expected additional information you need to gain in order to learn $p$) is
$$
KL(q \mid \mid p) = E_q[h_p(x) - h_q(x)] = \int q(x) [h_p(x) - h_q(x)] dx = \int q(x) \log \frac{q(x)}{p(x)} dx.
$$
I gave an intuitive interpretation of the $KL$ divergence, but one thing about this form is that it's guaranteed to be non-negative. Even though individual instances of $q(x) \log \frac{q(x)}{p(x)}$ might be negative, the instances where $q>p$ will outweigh the others. (More explicitly, the Gibbs' inequality guarantees that it's non-negative.) This means that, as long as $p(x)$ is the true distribution you will always need more information if you're using a proxy, $q(x),$ to draw from. By contrast, your suggestion of using $H[q] - H[p]$ has no such guarantee. | Where does the Kullback-Leibler come from? [duplicate] | KL divergence is very closely related to entropy. It helps to understand the motivation behind the expression for entropy.
For some distribution $p(x)$ we call $h(x) = -\log p(x)$ the information rece | Where does the Kullback-Leibler come from? [duplicate]
KL divergence is very closely related to entropy. It helps to understand the motivation behind the expression for entropy.
For some distribution $p(x)$ we call $h(x) = -\log p(x)$ the information received by observing variable $x$. It's called "information" because it has the nice property that if we observe two completely independent events, $x$ and $y$, then the information gained is the sum of the information gained from each event. I.e. if $x$ and $y$ are independent, $p(x,y) = p(x) p(y),$ then $h(x,y) = -\log p(x) - \log p(y) = h(x) + h(y).$ Likewise, if $x$ and $y$ are completely mutually occurring, then it follows that observing $x$ and $y$ simply gives us the exact same information as observing $x$ alone. That is $p(x,y) = p(x) \implies h(x,y) = h(x).$ Also, this form of $h(x)$ ensures that the information gained is always positive, and more information is gained by observing events that are less likely.
To that end, entropy is the expected amount of information gained by observing a variable drawn from a distribution $p$. That is,
$$
H[p] = E[h_p(x)] = \int p(x) h_p(x) dx = - \int p(x) \log p(x) dx,
$$
where I'm calling $h_p(x)$ the information gained from observing $x$ if $x$ is drawn from distribution $p.$
The KL divergence is often used in situations where $p$ is some unknown true distribution, and $q$ is a proxy distribution that we're using to estimate $p.$ $KL(p \mid \mid q)$ is the expected difference in information received by observing $x$ if $q$ was the true distribution, vs if $p$ was the true distribution, and that expectation is taken over a single distribution. Another way of saying it is that it's the expected additional information we need to receive from observing $x$ in order to get all the information we need.
Suppose you couldn't draw from $p(x)$ but could only evaluate it for a given $x$ (or, at the very least, you could evaluate the ratio $p(x)/q(x)$). But you could draw from your proxy distribution $q(x),$ and you want to learn $p.$ Then the expected deficit of information you gain (or the expected additional information you need to gain in order to learn $p$) is
$$
KL(q \mid \mid p) = E_q[h_p(x) - h_q(x)] = \int q(x) [h_p(x) - h_q(x)] dx = \int q(x) \log \frac{q(x)}{p(x)} dx.
$$
I gave an intuitive interpretation of the $KL$ divergence, but one thing about this form is that it's guaranteed to be non-negative. Even though individual instances of $q(x) \log \frac{q(x)}{p(x)}$ might be negative, the instances where $q>p$ will outweigh the others. (More explicitly, the Gibbs' inequality guarantees that it's non-negative.) This means that, as long as $p(x)$ is the true distribution you will always need more information if you're using a proxy, $q(x),$ to draw from. By contrast, your suggestion of using $H[q] - H[p]$ has no such guarantee. | Where does the Kullback-Leibler come from? [duplicate]
KL divergence is very closely related to entropy. It helps to understand the motivation behind the expression for entropy.
For some distribution $p(x)$ we call $h(x) = -\log p(x)$ the information rece |
55,836 | hidden markov model with multiple factors | Important References
I would strongly suggest you to check the Bayesian Network or probabilistic graphical model literature, which can answer your question perfectly.
If you have limited time, this page by Kevin Murphy, A Brief Introduction to Graphical Models and Bayesian Networks is a good start. The page gives basic ideas of inference and learning from data. In section Temporal models, it gives different forms of HMM. Here are some examples.
My answers
From the way you described your question, I assume you were thinking to represent HMM in "state diagram". where each node represents a state, and the links represent transition probabilities.
I am going to use a different notation here: we use graphical model to represent Hidden Markov Model as in following figure. In your example, this model represent we have $N$ data points on both hidden state $X$ ("temperature"), and observed data $Y$ ("ring sizes"). The arrow in the graph represents the conditional dependencies.
$P(X_i|X_{i-1})$ is the "state transition", $2 \times 2$ matrix.
$P(Y_i|X_i)$ is the "emission probability", $2 \times 3$ matrix.
Note, this diagram gives us the conditional dependency assumptions, i.e., it gives us the join distribution as follows
$$
\begin{align*}
P(\mathbf X,\mathbf Y) & =\left( P(X_1)\prod_{i=1}^{N-1} P(X_{i+1}|X_{i}) \right) \left( \prod_{i=1}^N P(Y_i|X_{i}) \right)\\
\end{align*}
$$
And the joint distribution is parameterized by $2+4+6=12$ parameters, which are $P(X_1)$ a $1\times 2$ matrix, $P(X_i|X_{i-1})$ a $2\times 2$ matrix, $P(Y_i|X_i)$ a $2\times 3$ matrix.
When there are more than one "factors" generated by hidden stages, we can change the diagram into
Note, the formula of joint distribution will be changed into
$$
\begin{align*}
P(\mathbf X,\mathbf Y, \mathbf Z) & =\left( P(X_1)\prod_{i=1}^{N-1} P(X_{i+1}|X_{i}) \right) \left( \prod_{i=1}^N P(Y_i|X_{i}) \right)\left( \prod_{i=1}^N P(Z_i|X_{i}) \right)\\
\end{align*}
$$
Note, now the model has more parameters in $P(Z_i|X_i)$.
In addition, when there are more than one "factors" in hidden state to affect observations, we can use this diagram to represent. (the given model is only ONE example, one can edit it to reflect different dependency assumptions. For example, adding links on $Z_i$.)
The are not classical HMM but a general directed model. Different names, e.g., Auto regressive HMM, Input-output HMM Coupled HMM Factorial HMM etc., of the model can be found in Murphy's tutorial page mentioned earlier.
For general directed probabilistic graphical model, we still can learn the model from data, and once we have the model, we can run "inference" to predict.
The book Bayesian Network in R is a good start. | hidden markov model with multiple factors | Important References
I would strongly suggest you to check the Bayesian Network or probabilistic graphical model literature, which can answer your question perfectly.
If you have limited time, this | hidden markov model with multiple factors
Important References
I would strongly suggest you to check the Bayesian Network or probabilistic graphical model literature, which can answer your question perfectly.
If you have limited time, this page by Kevin Murphy, A Brief Introduction to Graphical Models and Bayesian Networks is a good start. The page gives basic ideas of inference and learning from data. In section Temporal models, it gives different forms of HMM. Here are some examples.
My answers
From the way you described your question, I assume you were thinking to represent HMM in "state diagram". where each node represents a state, and the links represent transition probabilities.
I am going to use a different notation here: we use graphical model to represent Hidden Markov Model as in following figure. In your example, this model represent we have $N$ data points on both hidden state $X$ ("temperature"), and observed data $Y$ ("ring sizes"). The arrow in the graph represents the conditional dependencies.
$P(X_i|X_{i-1})$ is the "state transition", $2 \times 2$ matrix.
$P(Y_i|X_i)$ is the "emission probability", $2 \times 3$ matrix.
Note, this diagram gives us the conditional dependency assumptions, i.e., it gives us the join distribution as follows
$$
\begin{align*}
P(\mathbf X,\mathbf Y) & =\left( P(X_1)\prod_{i=1}^{N-1} P(X_{i+1}|X_{i}) \right) \left( \prod_{i=1}^N P(Y_i|X_{i}) \right)\\
\end{align*}
$$
And the joint distribution is parameterized by $2+4+6=12$ parameters, which are $P(X_1)$ a $1\times 2$ matrix, $P(X_i|X_{i-1})$ a $2\times 2$ matrix, $P(Y_i|X_i)$ a $2\times 3$ matrix.
When there are more than one "factors" generated by hidden stages, we can change the diagram into
Note, the formula of joint distribution will be changed into
$$
\begin{align*}
P(\mathbf X,\mathbf Y, \mathbf Z) & =\left( P(X_1)\prod_{i=1}^{N-1} P(X_{i+1}|X_{i}) \right) \left( \prod_{i=1}^N P(Y_i|X_{i}) \right)\left( \prod_{i=1}^N P(Z_i|X_{i}) \right)\\
\end{align*}
$$
Note, now the model has more parameters in $P(Z_i|X_i)$.
In addition, when there are more than one "factors" in hidden state to affect observations, we can use this diagram to represent. (the given model is only ONE example, one can edit it to reflect different dependency assumptions. For example, adding links on $Z_i$.)
The are not classical HMM but a general directed model. Different names, e.g., Auto regressive HMM, Input-output HMM Coupled HMM Factorial HMM etc., of the model can be found in Murphy's tutorial page mentioned earlier.
For general directed probabilistic graphical model, we still can learn the model from data, and once we have the model, we can run "inference" to predict.
The book Bayesian Network in R is a good start. | hidden markov model with multiple factors
Important References
I would strongly suggest you to check the Bayesian Network or probabilistic graphical model literature, which can answer your question perfectly.
If you have limited time, this |
55,837 | Use continuous variables or buckets in neural net? | The non-linearity you are concerned about can be effectively handled by neural nets. That is one of the key points with using them instead of a linear model. A neural net can , at least theoretically, approximate any continuous function. It is called the Universal approximation theorem. Of course it might still be hard to learn but in practice it generally works quite well even if you don't find the optimal solution.
So, in short. No, you do not need to split the features into buckets.
Example
I'll show the non-linear problem by example.
Here is a linear dataset with two continuous features and one continuous output (the color of the dots). Hence a regression problem similar to the housing price example but more obvious.
I've trained a linear model on the data which shows as the shade behind it.
This obviously works really well in the linear case (left). But if we try to train a linear model on a non linear dataset the output doesn't look as good (right).
As you would expect it is not possible for the model to capture the relationship in the data. This is where you could resort to binning the data into buckets. Effectively discretizing the predictions into squares in the input space. Or if you want more continuity you can use splines for these but looking at this set you might expect that this can be quite tricky as the pattern is dependent on both features. You can easily imagine more complex structures in more high dimensional problems.
Another approach to solve the non-linearity is to add some hidden neurons to your linear model making it a neural net. Adding 3 hidden neurons will give you the following non-linear output (left) and adding another layer and a few more neurons gives you an even more accurate solution (right).
The example images are generated here: http://playground.tensorflow.org. It's a great place to play around with neural nets and see what effects different parameters will have on the training and result. | Use continuous variables or buckets in neural net? | The non-linearity you are concerned about can be effectively handled by neural nets. That is one of the key points with using them instead of a linear model. A neural net can , at least theoretically, | Use continuous variables or buckets in neural net?
The non-linearity you are concerned about can be effectively handled by neural nets. That is one of the key points with using them instead of a linear model. A neural net can , at least theoretically, approximate any continuous function. It is called the Universal approximation theorem. Of course it might still be hard to learn but in practice it generally works quite well even if you don't find the optimal solution.
So, in short. No, you do not need to split the features into buckets.
Example
I'll show the non-linear problem by example.
Here is a linear dataset with two continuous features and one continuous output (the color of the dots). Hence a regression problem similar to the housing price example but more obvious.
I've trained a linear model on the data which shows as the shade behind it.
This obviously works really well in the linear case (left). But if we try to train a linear model on a non linear dataset the output doesn't look as good (right).
As you would expect it is not possible for the model to capture the relationship in the data. This is where you could resort to binning the data into buckets. Effectively discretizing the predictions into squares in the input space. Or if you want more continuity you can use splines for these but looking at this set you might expect that this can be quite tricky as the pattern is dependent on both features. You can easily imagine more complex structures in more high dimensional problems.
Another approach to solve the non-linearity is to add some hidden neurons to your linear model making it a neural net. Adding 3 hidden neurons will give you the following non-linear output (left) and adding another layer and a few more neurons gives you an even more accurate solution (right).
The example images are generated here: http://playground.tensorflow.org. It's a great place to play around with neural nets and see what effects different parameters will have on the training and result. | Use continuous variables or buckets in neural net?
The non-linearity you are concerned about can be effectively handled by neural nets. That is one of the key points with using them instead of a linear model. A neural net can , at least theoretically, |
55,838 | Use continuous variables or buckets in neural net? | There are probably no principled ways to determine when to create buckets or use the value as continuous like the 'age' feature, since the predictiveness of age in different tasks vary a lot.
Trial and error is always good if having enough time and computation resources. If not, manually decide how many buckets to create or how many ways of bucket creation to experiment based on intuition is usually good-performing if confident that the bucket creation well reflects experience and knowledge regarding this feature. | Use continuous variables or buckets in neural net? | There are probably no principled ways to determine when to create buckets or use the value as continuous like the 'age' feature, since the predictiveness of age in different tasks vary a lot.
Trial a | Use continuous variables or buckets in neural net?
There are probably no principled ways to determine when to create buckets or use the value as continuous like the 'age' feature, since the predictiveness of age in different tasks vary a lot.
Trial and error is always good if having enough time and computation resources. If not, manually decide how many buckets to create or how many ways of bucket creation to experiment based on intuition is usually good-performing if confident that the bucket creation well reflects experience and knowledge regarding this feature. | Use continuous variables or buckets in neural net?
There are probably no principled ways to determine when to create buckets or use the value as continuous like the 'age' feature, since the predictiveness of age in different tasks vary a lot.
Trial a |
55,839 | factor loadings = eigenvectors in R output? | However, my understanding is that loadings are computed as the product
of the eigenvector and the square root of the eigenvalue.
I depends on definition of loading you use. In princomp loadings are simply coefficients of principal components (recall that principal components are linear combinations of original variables) that are equal to eigenvectors entries. This has one inconvenience: since variance of each PC equals corresponding eigenvaule, loadings defined this way are not correlations between PC's and original variables. Correction by square root of eigenvalue is done to standardize the variance of PC scores to 1 and therefore to allow for correlation interpretation of loadings. These standardized loadings are sometimes called loadings as well. See for example PCA function from FactoMineR package. It never uses a word loadings, it uses word coordinates for standardized loadings.
Not only do the cumulative and proportion variance not match the initial output
loadings function doesn't give you cumulative and proportion variance. It just gives you sum of squares of each PC's loadings. And this, by definition, is 1. So, you'll always see this kind of output. It sounds ridicullus but works well when you apply loadings function to Explanatory Factor Analysis. In PCA, second part of loadings output is simply useless.
the claim that the first component captures 66% of the variance is
impossible with these loading values, because every single variable in
the data set (A-F) has a later component with a higher (absolute)
loading
Actually it is possible, since loadings here are just eigenvectors not standardized loadings. | factor loadings = eigenvectors in R output? | However, my understanding is that loadings are computed as the product
of the eigenvector and the square root of the eigenvalue.
I depends on definition of loading you use. In princomp loadings are | factor loadings = eigenvectors in R output?
However, my understanding is that loadings are computed as the product
of the eigenvector and the square root of the eigenvalue.
I depends on definition of loading you use. In princomp loadings are simply coefficients of principal components (recall that principal components are linear combinations of original variables) that are equal to eigenvectors entries. This has one inconvenience: since variance of each PC equals corresponding eigenvaule, loadings defined this way are not correlations between PC's and original variables. Correction by square root of eigenvalue is done to standardize the variance of PC scores to 1 and therefore to allow for correlation interpretation of loadings. These standardized loadings are sometimes called loadings as well. See for example PCA function from FactoMineR package. It never uses a word loadings, it uses word coordinates for standardized loadings.
Not only do the cumulative and proportion variance not match the initial output
loadings function doesn't give you cumulative and proportion variance. It just gives you sum of squares of each PC's loadings. And this, by definition, is 1. So, you'll always see this kind of output. It sounds ridicullus but works well when you apply loadings function to Explanatory Factor Analysis. In PCA, second part of loadings output is simply useless.
the claim that the first component captures 66% of the variance is
impossible with these loading values, because every single variable in
the data set (A-F) has a later component with a higher (absolute)
loading
Actually it is possible, since loadings here are just eigenvectors not standardized loadings. | factor loadings = eigenvectors in R output?
However, my understanding is that loadings are computed as the product
of the eigenvector and the square root of the eigenvalue.
I depends on definition of loading you use. In princomp loadings are |
55,840 | ML estimator for chi square distribution | The problem is that you haven't written down the correct likelihood.
Suppose $X$ is a positive multiple $\theta$ ($=\sigma^2$) of a variable $Y$ with distribution function $F_Y$ and density $f_Y$. To find the density of $X$ itself, resort to the definition of the distribution function:
$$F_X(x) = \Pr(X\le x) = \Pr(\theta Y \le x) = \Pr(Y \le x/\theta) = F_Y\left(\frac{x}{\theta}\right).$$
The density of $X$ therefore is
$$f_X(x) = \frac{d}{dx} F_X(x) = \frac{d}{dx} F_Y\left(\frac{x}{\theta}\right)=\frac{1}{\theta}f_Y\left(\frac{x}{\theta}\right).$$
You mustn't forget the factor of $1/\theta$. Let's see how it works out.
Suppose we observe $X=x$. (In the application, this observation is the statistic $x = \sum_{i=1}^n (y_i-\bar y)^2$ from $n$ iid Normal variates $Y_i$ with realizations $y_i$.) As usual, let's minimize the likelihood by differentiating the logarithm and setting that to zero:
$$0 = \frac{d}{d\theta}\log\left(\frac{1}{\theta} f_Y\left(\frac{x}{\theta}\right)\right)=-\frac{1}{\theta} - \frac{x}{\theta^2} \left(\log f_Y\right)^\prime\left(\frac{x}{\theta}\right).\tag{1}$$
For a $\chi^2(n-1)$ distribution, $$\log(f_Y(y)) = C + \frac{n-3}{2}\log(y) - \frac{y}{2}$$ where $C$ does not depend on $y$. Its derivative is $$(\log f_Y)^\prime(y) = \frac{n-3}{2y} - \frac{1}{2}.\tag{2}$$
Plugging $y=x/\theta$ into $(2)$ and evaluating $(1)$ produces
$$0 = -\frac{1}{\theta} - \frac{x}{\theta^2}\left( \frac{n-3}{2x/\theta} - \frac{1}{2}\right)$$
with the unique solution $$\hat\theta = \frac{x}{n-1} = \frac{\sum_{i=1}^n(y_i-\bar y)^2}{n-1},$$ as claimed. | ML estimator for chi square distribution | The problem is that you haven't written down the correct likelihood.
Suppose $X$ is a positive multiple $\theta$ ($=\sigma^2$) of a variable $Y$ with distribution function $F_Y$ and density $f_Y$. To | ML estimator for chi square distribution
The problem is that you haven't written down the correct likelihood.
Suppose $X$ is a positive multiple $\theta$ ($=\sigma^2$) of a variable $Y$ with distribution function $F_Y$ and density $f_Y$. To find the density of $X$ itself, resort to the definition of the distribution function:
$$F_X(x) = \Pr(X\le x) = \Pr(\theta Y \le x) = \Pr(Y \le x/\theta) = F_Y\left(\frac{x}{\theta}\right).$$
The density of $X$ therefore is
$$f_X(x) = \frac{d}{dx} F_X(x) = \frac{d}{dx} F_Y\left(\frac{x}{\theta}\right)=\frac{1}{\theta}f_Y\left(\frac{x}{\theta}\right).$$
You mustn't forget the factor of $1/\theta$. Let's see how it works out.
Suppose we observe $X=x$. (In the application, this observation is the statistic $x = \sum_{i=1}^n (y_i-\bar y)^2$ from $n$ iid Normal variates $Y_i$ with realizations $y_i$.) As usual, let's minimize the likelihood by differentiating the logarithm and setting that to zero:
$$0 = \frac{d}{d\theta}\log\left(\frac{1}{\theta} f_Y\left(\frac{x}{\theta}\right)\right)=-\frac{1}{\theta} - \frac{x}{\theta^2} \left(\log f_Y\right)^\prime\left(\frac{x}{\theta}\right).\tag{1}$$
For a $\chi^2(n-1)$ distribution, $$\log(f_Y(y)) = C + \frac{n-3}{2}\log(y) - \frac{y}{2}$$ where $C$ does not depend on $y$. Its derivative is $$(\log f_Y)^\prime(y) = \frac{n-3}{2y} - \frac{1}{2}.\tag{2}$$
Plugging $y=x/\theta$ into $(2)$ and evaluating $(1)$ produces
$$0 = -\frac{1}{\theta} - \frac{x}{\theta^2}\left( \frac{n-3}{2x/\theta} - \frac{1}{2}\right)$$
with the unique solution $$\hat\theta = \frac{x}{n-1} = \frac{\sum_{i=1}^n(y_i-\bar y)^2}{n-1},$$ as claimed. | ML estimator for chi square distribution
The problem is that you haven't written down the correct likelihood.
Suppose $X$ is a positive multiple $\theta$ ($=\sigma^2$) of a variable $Y$ with distribution function $F_Y$ and density $f_Y$. To |
55,841 | Expectation of the absolute difference of two i.i.d Normal distributions | If $X$ and $Y$ are independent normal random variables, then $X - Y$ is normal.
$$ X - Y \sim Normal(0, \sqrt{2}) $$
From here, the expectation of the absolute value of a standard normal is:
$$ E \left[ | X | \right] = \sqrt{\frac{2}{\pi}} $$
So for the difference
$$ E \left[ | X - Y | \right] = \sqrt{2} E \left[ | X | \right] = \sqrt{\frac{4}{\pi}} = \frac{2}{\sqrt{\pi}} $$
In R, from a simulation
> X <- rnorm(10000)
> Y <- rnorm(10000)
> Z <- mean(abs(X - Y))
> Z
[1] 1.124015
And numerically
> 2/sqrt(pi)
[1] 1.128379 | Expectation of the absolute difference of two i.i.d Normal distributions | If $X$ and $Y$ are independent normal random variables, then $X - Y$ is normal.
$$ X - Y \sim Normal(0, \sqrt{2}) $$
From here, the expectation of the absolute value of a standard normal is:
$$ E \l | Expectation of the absolute difference of two i.i.d Normal distributions
If $X$ and $Y$ are independent normal random variables, then $X - Y$ is normal.
$$ X - Y \sim Normal(0, \sqrt{2}) $$
From here, the expectation of the absolute value of a standard normal is:
$$ E \left[ | X | \right] = \sqrt{\frac{2}{\pi}} $$
So for the difference
$$ E \left[ | X - Y | \right] = \sqrt{2} E \left[ | X | \right] = \sqrt{\frac{4}{\pi}} = \frac{2}{\sqrt{\pi}} $$
In R, from a simulation
> X <- rnorm(10000)
> Y <- rnorm(10000)
> Z <- mean(abs(X - Y))
> Z
[1] 1.124015
And numerically
> 2/sqrt(pi)
[1] 1.128379 | Expectation of the absolute difference of two i.i.d Normal distributions
If $X$ and $Y$ are independent normal random variables, then $X - Y$ is normal.
$$ X - Y \sim Normal(0, \sqrt{2}) $$
From here, the expectation of the absolute value of a standard normal is:
$$ E \l |
55,842 | The Defective Lock Problem | This problem, as I have interpreted it, has an easy one-line solution. This answer states my interpretation, outlines the background theory, presents the solution, gives a worked example, and appends an alternative solution (also a one-liner) for those familiar with generating functions.
Statement of the problem
Let's be as clear as possible about the problem. The settings on the "dials" may be construed as a vector $\mathbf{d}=(d_1,d_2, \ldots, d_D)$ where each $d_i$ is set to one of $S$ possible values, which we might as well designate $0, 1, \ldots, S-1$, as in the question. Ordinarily, a unique setting $\mathbf{d}$ "opens" the lock. The "defect" is that any permutation of this setting opens the lock too. How many permutations are there?
The answer depends on $\mathbf d$. This is apparent even in the simplest case $D=2,S=2$. The question notes there are two permutations of $\mathbf{d}=(0,1)$. However, there is only one permutation of $(1,1)$ and only one permutation of $(0,0)$. Evidently, the presence of repeat values in $\mathbf d$ reduces the number of (distinguishable) permutations.
The well-known Orbit-Stabilizer Theorem provides a quick answer. To see how it applies, let's examine the (obvious) group action. After that's out of the way, I will quote the theorem and apply it.
The group action
The symmetric group $\mathfrak{S}_D$ acts on the set of all these vectors, $S^D$, by permuting their components. For $\sigma \in \mathfrak{S}_D$ and $\mathbf {d}\in S^D$ let's write $\mathbf{d}^\sigma$ for the result of applying $\sigma$ to $\mathbf d$. The permutations of $\mathbf{d}$ form its orbit,
$$\mathfrak{S}_D \cdot \mathbf{d} = \{\mathbf{d}^\sigma\mid \sigma\in\mathfrak{S}_D\}.$$
Some of these permutations fix $\mathbf d$. This is the stabilizer subgroup
$$\left(\mathfrak{S}_D\right)_\mathbf{d} = \{\sigma\in\mathfrak{S}_D\mid \mathbf{d}^\sigma=\mathbf{d}\}.$$
The stabilizer is easy to characterize: for each possible setting $s\in S$, consider the components of $\mathbf d$ that equal $s$. (There may be none.) All permutations that move these components among themselves obviously leave $\mathbf d$ unchanged and any permutation that leaves $\mathbf d$ unchanged must be of this nature. Writing $n(s,\mathbf d)$ for the number of components of $\mathbf d$ equal to $s$, we see that the stabilizer of $\mathbf d$ is isomorphic to the product of the permutation groups on $n(s,\mathbf d)$ elements:
$$\left(\mathfrak{S}_D\right)_\mathbf{d} \cong \prod_{s\in S} \mathfrak{S}_{n(s,\mathbf d)}\tag{1}.$$
The Orbit-Stabilizer theorem asserts the number of elements in the orbit of $\mathbf d$ is the ratio of the sizes of the two groups in question: the "big group" $\mathfrak{S}_D$ and the stabilizer subgroup:
$$\vert \mathfrak{S}_D \cdot \mathbf{d} \vert = \frac{\vert \mathfrak{S}_D\vert}{\vert \left(\mathfrak{S}_D\right)_\mathbf{d} \vert} = \frac{\vert \mathfrak{S}_D\vert}{\prod_{s\in S} \vert \mathfrak{S}_{n(s,\mathbf d)}\vert }\tag{2}.$$
The second equality follows from $(1)$. On the left is what we want to find.
The solution
Since the number of elements in any symmetric group $\mathfrak{S}_n$ is $n!$ (this often is taken as a definition of the factorial $n!$), we may immediately write down the solution from $(2)$ as
$$\vert \mathfrak{S}_D \cdot \mathbf{d} \vert = \frac{D!}{\prod_{s\in S}n(s,\mathbf d)!} = \binom{D}{n(0, \mathbf{d}), n(1, \mathbf{d}), \ldots, n(s-1, \mathbf{d})}.$$
This is the multinomial coefficient determined by $\mathbf d$.
Example
Let $\mathbf{d} = (0,3,1,0)$ with $S=\{0,1,2,3,4\}$. Counting,
$$\eqalign{n(0,\mathbf{d})=&2 \\ n(1,\mathbf{d})=&1 \\ n(2,\mathbf{d})=&0 \\ n(3,\mathbf{d})=&1 \\ n(4,\mathbf{d})=&0.}$$
Since $D =4$, the answer is
$$\vert \mathfrak{S}_D \cdot \mathbf{(0,1,3,0)} \vert = \binom{4}{2,1,0,1,0} = \frac{4!}{2!1!0!1!0!}= \frac{24}{2}=12.$$
Indeed, the twelve equivalent permutations of $\mathbf d$ are (in an abbreviated notation, dropping parentheses and commas)
$$\eqalign{\mathfrak{S}_4 \cdot 0130=\{ &0013,0031,0103,0130,0301,0310,\\&1003,1030,1300,3001,3010,3100\ \}.}$$
Alternative solution
Having seen the appearance of the multinomial coefficients, it becomes obvious the answer must be the coefficient of $\mathbf{x}^\mathbf{d} = x_0^{d_0}x_1^{d_1}\cdots x_{s-1}^{d_{s-1}}$ in the multinomial expansion of $$\left(\mathbf{x}\cdot \mathbf{1}\right)^D = (x_0+x_1+\cdots + x_{s-1})^D = \\\sum_{n_0+n_1+\cdots+n_{s-1}=D}\binom{D}{n_0,n_1,\ldots, n_{s-1}}x_0^{n_0}x_1^{n_1}\cdots x_{s-1}^{n_{s-1}},$$
because it counts the number of distinct ways of collecting $n(0,\mathbf d)$ zeros, $n(1,\mathbf {d})$ ones, etc, from the terms in the product and these enumerate all the distinguishable permutations of $\mathbf{d}$. In the case of two dials these coefficients obviously are the Binomial coefficients, as discovered in the edits to the question. | The Defective Lock Problem | This problem, as I have interpreted it, has an easy one-line solution. This answer states my interpretation, outlines the background theory, presents the solution, gives a worked example, and appends | The Defective Lock Problem
This problem, as I have interpreted it, has an easy one-line solution. This answer states my interpretation, outlines the background theory, presents the solution, gives a worked example, and appends an alternative solution (also a one-liner) for those familiar with generating functions.
Statement of the problem
Let's be as clear as possible about the problem. The settings on the "dials" may be construed as a vector $\mathbf{d}=(d_1,d_2, \ldots, d_D)$ where each $d_i$ is set to one of $S$ possible values, which we might as well designate $0, 1, \ldots, S-1$, as in the question. Ordinarily, a unique setting $\mathbf{d}$ "opens" the lock. The "defect" is that any permutation of this setting opens the lock too. How many permutations are there?
The answer depends on $\mathbf d$. This is apparent even in the simplest case $D=2,S=2$. The question notes there are two permutations of $\mathbf{d}=(0,1)$. However, there is only one permutation of $(1,1)$ and only one permutation of $(0,0)$. Evidently, the presence of repeat values in $\mathbf d$ reduces the number of (distinguishable) permutations.
The well-known Orbit-Stabilizer Theorem provides a quick answer. To see how it applies, let's examine the (obvious) group action. After that's out of the way, I will quote the theorem and apply it.
The group action
The symmetric group $\mathfrak{S}_D$ acts on the set of all these vectors, $S^D$, by permuting their components. For $\sigma \in \mathfrak{S}_D$ and $\mathbf {d}\in S^D$ let's write $\mathbf{d}^\sigma$ for the result of applying $\sigma$ to $\mathbf d$. The permutations of $\mathbf{d}$ form its orbit,
$$\mathfrak{S}_D \cdot \mathbf{d} = \{\mathbf{d}^\sigma\mid \sigma\in\mathfrak{S}_D\}.$$
Some of these permutations fix $\mathbf d$. This is the stabilizer subgroup
$$\left(\mathfrak{S}_D\right)_\mathbf{d} = \{\sigma\in\mathfrak{S}_D\mid \mathbf{d}^\sigma=\mathbf{d}\}.$$
The stabilizer is easy to characterize: for each possible setting $s\in S$, consider the components of $\mathbf d$ that equal $s$. (There may be none.) All permutations that move these components among themselves obviously leave $\mathbf d$ unchanged and any permutation that leaves $\mathbf d$ unchanged must be of this nature. Writing $n(s,\mathbf d)$ for the number of components of $\mathbf d$ equal to $s$, we see that the stabilizer of $\mathbf d$ is isomorphic to the product of the permutation groups on $n(s,\mathbf d)$ elements:
$$\left(\mathfrak{S}_D\right)_\mathbf{d} \cong \prod_{s\in S} \mathfrak{S}_{n(s,\mathbf d)}\tag{1}.$$
The Orbit-Stabilizer theorem asserts the number of elements in the orbit of $\mathbf d$ is the ratio of the sizes of the two groups in question: the "big group" $\mathfrak{S}_D$ and the stabilizer subgroup:
$$\vert \mathfrak{S}_D \cdot \mathbf{d} \vert = \frac{\vert \mathfrak{S}_D\vert}{\vert \left(\mathfrak{S}_D\right)_\mathbf{d} \vert} = \frac{\vert \mathfrak{S}_D\vert}{\prod_{s\in S} \vert \mathfrak{S}_{n(s,\mathbf d)}\vert }\tag{2}.$$
The second equality follows from $(1)$. On the left is what we want to find.
The solution
Since the number of elements in any symmetric group $\mathfrak{S}_n$ is $n!$ (this often is taken as a definition of the factorial $n!$), we may immediately write down the solution from $(2)$ as
$$\vert \mathfrak{S}_D \cdot \mathbf{d} \vert = \frac{D!}{\prod_{s\in S}n(s,\mathbf d)!} = \binom{D}{n(0, \mathbf{d}), n(1, \mathbf{d}), \ldots, n(s-1, \mathbf{d})}.$$
This is the multinomial coefficient determined by $\mathbf d$.
Example
Let $\mathbf{d} = (0,3,1,0)$ with $S=\{0,1,2,3,4\}$. Counting,
$$\eqalign{n(0,\mathbf{d})=&2 \\ n(1,\mathbf{d})=&1 \\ n(2,\mathbf{d})=&0 \\ n(3,\mathbf{d})=&1 \\ n(4,\mathbf{d})=&0.}$$
Since $D =4$, the answer is
$$\vert \mathfrak{S}_D \cdot \mathbf{(0,1,3,0)} \vert = \binom{4}{2,1,0,1,0} = \frac{4!}{2!1!0!1!0!}= \frac{24}{2}=12.$$
Indeed, the twelve equivalent permutations of $\mathbf d$ are (in an abbreviated notation, dropping parentheses and commas)
$$\eqalign{\mathfrak{S}_4 \cdot 0130=\{ &0013,0031,0103,0130,0301,0310,\\&1003,1030,1300,3001,3010,3100\ \}.}$$
Alternative solution
Having seen the appearance of the multinomial coefficients, it becomes obvious the answer must be the coefficient of $\mathbf{x}^\mathbf{d} = x_0^{d_0}x_1^{d_1}\cdots x_{s-1}^{d_{s-1}}$ in the multinomial expansion of $$\left(\mathbf{x}\cdot \mathbf{1}\right)^D = (x_0+x_1+\cdots + x_{s-1})^D = \\\sum_{n_0+n_1+\cdots+n_{s-1}=D}\binom{D}{n_0,n_1,\ldots, n_{s-1}}x_0^{n_0}x_1^{n_1}\cdots x_{s-1}^{n_{s-1}},$$
because it counts the number of distinct ways of collecting $n(0,\mathbf d)$ zeros, $n(1,\mathbf {d})$ ones, etc, from the terms in the product and these enumerate all the distinguishable permutations of $\mathbf{d}$. In the case of two dials these coefficients obviously are the Binomial coefficients, as discovered in the edits to the question. | The Defective Lock Problem
This problem, as I have interpreted it, has an easy one-line solution. This answer states my interpretation, outlines the background theory, presents the solution, gives a worked example, and appends |
55,843 | What is the difference between support vector machines and support vector regression? | a support vector machine performs classification
support vector regression performs regression
Related:
How does support vector regression work intuitively?
Support vector machines and regression | What is the difference between support vector machines and support vector regression? | a support vector machine performs classification
support vector regression performs regression
Related:
How does support vector regression work intuitively?
Support vector machines and regression | What is the difference between support vector machines and support vector regression?
a support vector machine performs classification
support vector regression performs regression
Related:
How does support vector regression work intuitively?
Support vector machines and regression | What is the difference between support vector machines and support vector regression?
a support vector machine performs classification
support vector regression performs regression
Related:
How does support vector regression work intuitively?
Support vector machines and regression |
55,844 | What is the difference between support vector machines and support vector regression? | I think this query is well addressed on the following link:
SVM vs. SVR | What is the difference between support vector machines and support vector regression? | I think this query is well addressed on the following link:
SVM vs. SVR | What is the difference between support vector machines and support vector regression?
I think this query is well addressed on the following link:
SVM vs. SVR | What is the difference between support vector machines and support vector regression?
I think this query is well addressed on the following link:
SVM vs. SVR |
55,845 | What does it mean to say that the t-distribution provides an "adjustment" or "estimate" for a normal distribution? | If the data are normally distributed, the test statistic with the sample standard deviation in the denominator will have a $t$ distribution with $n-1$ degrees of freedom. For large $n$, the $t$ distribution is approximately normal. But in small samples, it will be symmetric with heavier tails than those of the normal distribution. The fact that the tails are heavier than for the standard normal could be viewed as adjusting the normal in the tails. | What does it mean to say that the t-distribution provides an "adjustment" or "estimate" for a normal | If the data are normally distributed, the test statistic with the sample standard deviation in the denominator will have a $t$ distribution with $n-1$ degrees of freedom. For large $n$, the $t$ distr | What does it mean to say that the t-distribution provides an "adjustment" or "estimate" for a normal distribution?
If the data are normally distributed, the test statistic with the sample standard deviation in the denominator will have a $t$ distribution with $n-1$ degrees of freedom. For large $n$, the $t$ distribution is approximately normal. But in small samples, it will be symmetric with heavier tails than those of the normal distribution. The fact that the tails are heavier than for the standard normal could be viewed as adjusting the normal in the tails. | What does it mean to say that the t-distribution provides an "adjustment" or "estimate" for a normal
If the data are normally distributed, the test statistic with the sample standard deviation in the denominator will have a $t$ distribution with $n-1$ degrees of freedom. For large $n$, the $t$ distr |
55,846 | What does it mean to say that the t-distribution provides an "adjustment" or "estimate" for a normal distribution? | It is conventional to have the sample variance calculated to be an unbiased estimator of the population variance by dividing by $n-1$ rather than $n$.
But then the sample standard deviation is not an unbiased estimator of the population standard deviation, and the reciprocal of the sample standard deviation is even more biased as an estimator of the reciprocal of the population standard deviation. In particular, under-estimates of the variance will make the estimate of reciprocal of the standard deviation much much too big, and this is not fully offset by over-estimates of the variance.
Dividing the value of interest by the population standard deviation would lead to a normally distributed statistic.
But the $t$ statistic divides the value of interest by the estimate of the standard deviation from the sample (i.e. multiplies by its reciprocal). The possibility of this result being much more than it would have been using the population standard deviation helps drive the heavier tails of the $t$ distribution, and large distortions of this type are more likely with small samples. In a sense the $t$ distribution can be seen as an adjustment of the normal distribution to take this into account. | What does it mean to say that the t-distribution provides an "adjustment" or "estimate" for a normal | It is conventional to have the sample variance calculated to be an unbiased estimator of the population variance by dividing by $n-1$ rather than $n$.
But then the sample standard deviation is not a | What does it mean to say that the t-distribution provides an "adjustment" or "estimate" for a normal distribution?
It is conventional to have the sample variance calculated to be an unbiased estimator of the population variance by dividing by $n-1$ rather than $n$.
But then the sample standard deviation is not an unbiased estimator of the population standard deviation, and the reciprocal of the sample standard deviation is even more biased as an estimator of the reciprocal of the population standard deviation. In particular, under-estimates of the variance will make the estimate of reciprocal of the standard deviation much much too big, and this is not fully offset by over-estimates of the variance.
Dividing the value of interest by the population standard deviation would lead to a normally distributed statistic.
But the $t$ statistic divides the value of interest by the estimate of the standard deviation from the sample (i.e. multiplies by its reciprocal). The possibility of this result being much more than it would have been using the population standard deviation helps drive the heavier tails of the $t$ distribution, and large distortions of this type are more likely with small samples. In a sense the $t$ distribution can be seen as an adjustment of the normal distribution to take this into account. | What does it mean to say that the t-distribution provides an "adjustment" or "estimate" for a normal
It is conventional to have the sample variance calculated to be an unbiased estimator of the population variance by dividing by $n-1$ rather than $n$.
But then the sample standard deviation is not a |
55,847 | Does permutation permute also dependence? | The dependence structure is simply the values of $f(x_1,\cdots,x_n):=P(X_1,\cdots,X_n)$. So by permuting the vectors, you are also permuting the dependence. Nothing is lost (unless you can't keep track of your permutation indices). For example, if $Y=(Y_1,\cdots,Y_n)$ with $Y_i=X_{\pi(i)}$, where $\pi$ is your permutation, then $P(Y_1,\cdots,Y_n)=f(Y_{\pi^{-1}(1)},Y_{\pi^{-1}(2)},\cdots,Y_{\pi^{-1}(n)})=P(X_1,...,X_n)$. | Does permutation permute also dependence? | The dependence structure is simply the values of $f(x_1,\cdots,x_n):=P(X_1,\cdots,X_n)$. So by permuting the vectors, you are also permuting the dependence. Nothing is lost (unless you can't keep trac | Does permutation permute also dependence?
The dependence structure is simply the values of $f(x_1,\cdots,x_n):=P(X_1,\cdots,X_n)$. So by permuting the vectors, you are also permuting the dependence. Nothing is lost (unless you can't keep track of your permutation indices). For example, if $Y=(Y_1,\cdots,Y_n)$ with $Y_i=X_{\pi(i)}$, where $\pi$ is your permutation, then $P(Y_1,\cdots,Y_n)=f(Y_{\pi^{-1}(1)},Y_{\pi^{-1}(2)},\cdots,Y_{\pi^{-1}(n)})=P(X_1,...,X_n)$. | Does permutation permute also dependence?
The dependence structure is simply the values of $f(x_1,\cdots,x_n):=P(X_1,\cdots,X_n)$. So by permuting the vectors, you are also permuting the dependence. Nothing is lost (unless you can't keep trac |
55,848 | Convergence of Metropolis Hastings with time varying proposal density | What you are describing is adaptive MCMC if your proposal distribution depends on the history of the chain, and is thus time dependent. There is a lot of theory about ergodicity of adaptive mcmc. There are essentially two main conditions:
Diminishing Adaptation: The adaptation on the proposal distribution should diminish as a function of $t$.
Containment: The time to stationarity remains bounded in probability. This is more of a technical condition and often difficult to check.
You can find a variety of references on this:
These and these set of slides go into some detail about adaptive MCMC.
This is the paper that obtains the above conditions.
Here are some examples of adaptive MCMC. | Convergence of Metropolis Hastings with time varying proposal density | What you are describing is adaptive MCMC if your proposal distribution depends on the history of the chain, and is thus time dependent. There is a lot of theory about ergodicity of adaptive mcmc. Ther | Convergence of Metropolis Hastings with time varying proposal density
What you are describing is adaptive MCMC if your proposal distribution depends on the history of the chain, and is thus time dependent. There is a lot of theory about ergodicity of adaptive mcmc. There are essentially two main conditions:
Diminishing Adaptation: The adaptation on the proposal distribution should diminish as a function of $t$.
Containment: The time to stationarity remains bounded in probability. This is more of a technical condition and often difficult to check.
You can find a variety of references on this:
These and these set of slides go into some detail about adaptive MCMC.
This is the paper that obtains the above conditions.
Here are some examples of adaptive MCMC. | Convergence of Metropolis Hastings with time varying proposal density
What you are describing is adaptive MCMC if your proposal distribution depends on the history of the chain, and is thus time dependent. There is a lot of theory about ergodicity of adaptive mcmc. Ther |
55,849 | Are two Random Variables Independent if their support has a dependency? | I've convinced myself of the answer, so I'm answering my own question.
I've determined that if there is a dependency between $X$ and $Y$ in the support of a bivariate pdf, then $X$ and $Y$ cannot be independent. To be sure, there is a Lemma (4.2.7 in Casella and Berger's Statistical Inference, 2d) that states: Let ($X$,$Y$) be a bivariate random vector with joint pdf or pmf $f(x,y)$. Then $X$ and $Y$ are independent random variables if and only if there exists functions $g(x)$ and $h(y)$ such that for every $x$ $\in\mathbb{R}$ and $y$ $\in\mathbb{R}$:
$f(x,y)=g(x)h(y)$
If we incorporate the support (e.g. $0<y<x<1$) as an indicator function in the joint pdf (e.g. ${f_{XY}(x,y)=xy}I_{(0<y<x<1)}$ then the joint PDF cannot be written as a product of only $g(x)$ and only $h(y)$, so $X$ and $Y$ cannot be independent.) | Are two Random Variables Independent if their support has a dependency? | I've convinced myself of the answer, so I'm answering my own question.
I've determined that if there is a dependency between $X$ and $Y$ in the support of a bivariate pdf, then $X$ and $Y$ cannot be | Are two Random Variables Independent if their support has a dependency?
I've convinced myself of the answer, so I'm answering my own question.
I've determined that if there is a dependency between $X$ and $Y$ in the support of a bivariate pdf, then $X$ and $Y$ cannot be independent. To be sure, there is a Lemma (4.2.7 in Casella and Berger's Statistical Inference, 2d) that states: Let ($X$,$Y$) be a bivariate random vector with joint pdf or pmf $f(x,y)$. Then $X$ and $Y$ are independent random variables if and only if there exists functions $g(x)$ and $h(y)$ such that for every $x$ $\in\mathbb{R}$ and $y$ $\in\mathbb{R}$:
$f(x,y)=g(x)h(y)$
If we incorporate the support (e.g. $0<y<x<1$) as an indicator function in the joint pdf (e.g. ${f_{XY}(x,y)=xy}I_{(0<y<x<1)}$ then the joint PDF cannot be written as a product of only $g(x)$ and only $h(y)$, so $X$ and $Y$ cannot be independent.) | Are two Random Variables Independent if their support has a dependency?
I've convinced myself of the answer, so I'm answering my own question.
I've determined that if there is a dependency between $X$ and $Y$ in the support of a bivariate pdf, then $X$ and $Y$ cannot be |
55,850 | When you do PCA (or any dimensionality reduction), what is "the number of dimensions"? | (I cannot watch the video right now so this answer is to some extent a guess of what is meant)
First of all, yes we mostly talk about variates spanning the dimensions. However, it is also possible to take the opposite view (this is sometimes calles R-mode vs. Q-mode analysis).
Let me take a detour to cluster analysis to illustrate this:
Cluster analysis with variates = genes = dimensions will look for groups of cases that have similar gene expression patterns.
On the other hand, you can also take a "transposed" view and ask for groups of genes which are expressed similarly for the same cells. Genes and cells have changed their role compared to the first approach. For some types of data, you may get similar groupings both ways (see e.g. our paper on using this for spectroscopic data: A. Bonifacio, C. Beleites and V. Sergo: Application of R-mode analysis to Raman maps: a different way of looking at vibrational hyperspectral data, AnalBioanalChem, 407, 4 (2015) 1089–1095. DOI 10.1007/s00216-014-8321-7) whereas for other types of data both ways of looking at the data are interesting in themselves (e.g. for genetic data). In the latter case, you can use a heatmap giving both ways of clustering.
Now for PCA, the fun fact is that up to some decisions of standardization (row vs. columns for centering and possibly scaling) you'll arrive at the same solution both ways - just scores and loadings will change their role.
(see e.g. https://stats.stackexchange.com/a/147983/4598 and Why PCA of data by means of SVD of the data? for more details)
Is the number of dimension the number of cells or the number of genes?
IMHO this is rather ambiguious and as explained above depends on the view of the data you take (i.e. the question you ask/the application at hand).
For PCA, there's the additional ambiguity that "dimensions" is sometimes also used refering to the rank of the data matrix. The rank cannot be more than the smaller of number of rows and number of columns and it is also the maximum number of principal components for that data matrix and thus the number of dimensions of the resulting rotated coordinate system (before reducing dimensions by truncating this coordinate system).
In your example of 200 cells and 10⁴ genes the PC will at most span 200 dimensions, regardless of whether cells or genes were considered the variates by the mode of the data analysis. | When you do PCA (or any dimensionality reduction), what is "the number of dimensions"? | (I cannot watch the video right now so this answer is to some extent a guess of what is meant)
First of all, yes we mostly talk about variates spanning the dimensions. However, it is also possible to | When you do PCA (or any dimensionality reduction), what is "the number of dimensions"?
(I cannot watch the video right now so this answer is to some extent a guess of what is meant)
First of all, yes we mostly talk about variates spanning the dimensions. However, it is also possible to take the opposite view (this is sometimes calles R-mode vs. Q-mode analysis).
Let me take a detour to cluster analysis to illustrate this:
Cluster analysis with variates = genes = dimensions will look for groups of cases that have similar gene expression patterns.
On the other hand, you can also take a "transposed" view and ask for groups of genes which are expressed similarly for the same cells. Genes and cells have changed their role compared to the first approach. For some types of data, you may get similar groupings both ways (see e.g. our paper on using this for spectroscopic data: A. Bonifacio, C. Beleites and V. Sergo: Application of R-mode analysis to Raman maps: a different way of looking at vibrational hyperspectral data, AnalBioanalChem, 407, 4 (2015) 1089–1095. DOI 10.1007/s00216-014-8321-7) whereas for other types of data both ways of looking at the data are interesting in themselves (e.g. for genetic data). In the latter case, you can use a heatmap giving both ways of clustering.
Now for PCA, the fun fact is that up to some decisions of standardization (row vs. columns for centering and possibly scaling) you'll arrive at the same solution both ways - just scores and loadings will change their role.
(see e.g. https://stats.stackexchange.com/a/147983/4598 and Why PCA of data by means of SVD of the data? for more details)
Is the number of dimension the number of cells or the number of genes?
IMHO this is rather ambiguious and as explained above depends on the view of the data you take (i.e. the question you ask/the application at hand).
For PCA, there's the additional ambiguity that "dimensions" is sometimes also used refering to the rank of the data matrix. The rank cannot be more than the smaller of number of rows and number of columns and it is also the maximum number of principal components for that data matrix and thus the number of dimensions of the resulting rotated coordinate system (before reducing dimensions by truncating this coordinate system).
In your example of 200 cells and 10⁴ genes the PC will at most span 200 dimensions, regardless of whether cells or genes were considered the variates by the mode of the data analysis. | When you do PCA (or any dimensionality reduction), what is "the number of dimensions"?
(I cannot watch the video right now so this answer is to some extent a guess of what is meant)
First of all, yes we mostly talk about variates spanning the dimensions. However, it is also possible to |
55,851 | When you do PCA (or any dimensionality reduction), what is "the number of dimensions"? | An extensive discussion is already provided here in the answer by cbeleites, and under similar questions (PCA and Correspondence analysis in their relation to Biplot), so I'll just comment briefly on the specific video.
As the narrator never mentions "scores" or "loadings" explicitly throughout the video, and the term "dimensions" in PCA is already ambiguous, technically there were no mistakes. However, I agree with you that his presentation is confusing: the first part states that the dimensionality along cells is to be reduced (200 cells -> 2 PCs), and the second part actually focuses on reducing dimensionality along genes (10,000 genes -> 2 PCs). I'd say there's many better and still accessible introductions to PCA, with consistent presentation and actual terminology E.g.: http://webspace.ship.edu/pgmarr/Geo441/Lectures/Lec%2017%20-%20Principal%20Component%20Analysis.pdf . | When you do PCA (or any dimensionality reduction), what is "the number of dimensions"? | An extensive discussion is already provided here in the answer by cbeleites, and under similar questions (PCA and Correspondence analysis in their relation to Biplot), so I'll just comment briefly on | When you do PCA (or any dimensionality reduction), what is "the number of dimensions"?
An extensive discussion is already provided here in the answer by cbeleites, and under similar questions (PCA and Correspondence analysis in their relation to Biplot), so I'll just comment briefly on the specific video.
As the narrator never mentions "scores" or "loadings" explicitly throughout the video, and the term "dimensions" in PCA is already ambiguous, technically there were no mistakes. However, I agree with you that his presentation is confusing: the first part states that the dimensionality along cells is to be reduced (200 cells -> 2 PCs), and the second part actually focuses on reducing dimensionality along genes (10,000 genes -> 2 PCs). I'd say there's many better and still accessible introductions to PCA, with consistent presentation and actual terminology E.g.: http://webspace.ship.edu/pgmarr/Geo441/Lectures/Lec%2017%20-%20Principal%20Component%20Analysis.pdf . | When you do PCA (or any dimensionality reduction), what is "the number of dimensions"?
An extensive discussion is already provided here in the answer by cbeleites, and under similar questions (PCA and Correspondence analysis in their relation to Biplot), so I'll just comment briefly on |
55,852 | When you do PCA (or any dimensionality reduction), what is "the number of dimensions"? | The definition of a vector space is quite general and there are numerous ways to represent data as vectors in a vector space.
From my cursory examination, they may be doing the following?
Let $i = 1, \ldots, m$ index the gene.
Let $j = 1, \ldots, n$ index the cell.
Let $x_{i,j}$ denote the gene expression level of gene $i$ in cell $j$.
We then have a matrix of data $X$. You can run PCA on either $X$ or the transpose $X^T$.
Treat columns of X as vectors (what you're naturally thinking)
You're thinking we can construct a vector for each cell $j$ as:
$$ \mathbf{y}_j = \begin{bmatrix} x_{1,j} \\ x_{2,j} \\ \ldots \\ x_{m,j} \end{bmatrix}$$
That is, each vector $\mathbf{y}_j$ shows the gene expression levels of cell $j$ (and each index is a different gene).
Treat rows of X as vectors (what they appear to be doing in Image 3 and 4?)
We could also form vectors using the rows of $X$
$$ \mathbf{z}_i = \begin{bmatrix} x_{i, 1} \\ x_{i, 2} \\ \dots \\ x_{i, n} \end{bmatrix} $$
That is, each vector $\mathbf{z}_i$ shows the gene expression levels of gene $i$ (and each cell is a different index of the vector).
Onward to PCA
Once you have a bunch of vectors, you can always conduct PCA to find an alternative basis for that space.
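For instance, in R (a minimal sketch with a small made-up expression matrix; the gene and cell counts are arbitrary):
set.seed(1)
X <- matrix(rnorm(50 * 20), nrow = 50, ncol = 20)   # 50 genes (rows) x 20 cells (columns)
pca_cells <- prcomp(t(X))   # cells as observations, genes as variables (the y_j vectors)
pca_genes <- prcomp(X)      # genes as observations, cells as variables (the z_i vectors)
dim(pca_cells$x)   # 20 x 20: one row of scores per cell
dim(pca_genes$x)   # 50 x 20: one row of scores per gene
Which orientation is appropriate depends on whether you want a low-dimensional picture of the cells or of the genes.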
A subject area specific interpretation of that basis of course will depend on what your various vectors represent. | When you do PCA (or any dimensionality reduction), what is "the number of dimensions"? | The definition of a vector space is quite general and there are numerous ways to represent data as vectors in a vector space.
From my cursory examination, they may be doing the following?
Let $i = 1, | When you do PCA (or any dimensionality reduction), what is "the number of dimensions"?
The definition of a vector space is quite general and there are numerous ways to represent data as vectors in a vector space.
From my cursory examination, they may be doing the following?
Let $i = 1, \ldots, m$ index the gene.
Let $j = 1, \ldots, n$ index the cell.
Let $x_{i,j}$ denote the gene expression level of gene $i$ in cell $j$.
We then have a matrix of data $X$. You can run PCA on either $X$ or the transpose $X^T$.
Treat columns of X as vectors (what you're naturally thinking)
You're thinking we can construct a vector for each cell $j$ as:
$$ \mathbf{y}_j = \begin{bmatrix} x_{1,j} \\ x_{2,j} \\ \ldots \\ x_{m,j} \end{bmatrix}$$
That is, each vector $\mathbf{y}_j$ shows the gene expression levels of cell $j$ (and each index is a different gene).
Treat rows of X as vectors (what they appear to be doing in Image 3 and 4?)
We could also form vectors using the rows of $X$
$$ \mathbf{z}_i = \begin{bmatrix} x_{i, 1} \\ x_{i, 2} \\ \dots \\ x_{i, n} \end{bmatrix} $$
That is, each vector $\mathbf{z}_i$ shows the gene expression levels of gene $i$ (and each cell is a different index of the vector).
Onward to PCA
Once you have a bunch of vectors, you can always conduct PCA to find an alternative basis for that space.
A subject area specific interpretation of that basis of course will depend on what your various vectors represent. | When you do PCA (or any dimensionality reduction), what is "the number of dimensions"?
The definition of a vector space is quite general and there are numerous ways to represent data as vectors in a vector space.
From my cursory examination, they may be doing the following?
Let $i = 1, |
55,853 | When you do PCA (or any dimensionality reduction), what is "the number of dimensions"? | I don't like these videos. They only make understanding PCA more difficult by bringing in irrelevant details. Also, they're long and wordy.
The idea of PCA is very simple when it comes to applications. You have several series of data, call them variables. Say you have N variables (series) $x_1(t),x_2(t),\dots,x_N(t)$.
Sometimes there are a few underlying factors that drive all these series. Let's say there are M factors $f_1(t),f_2(t),\dots,f_M(t)$, and you suspect (or know for sure) that they drive the variables:
$$x_1(t)=c_{11}f_1(t)+\dots+c_{1M}f_M(t)\\\dots\\
x_N(t)=c_{N1}f_1(t)+\dots+c_{NM}f_M(t)$$
So, you're interested to extract the factor values $f_j(t)$ and the coefficients $c_{ij}$. So, PCA is one way of accomplishing this. In fact, if you don't know what is the exact number of components M, it can help you find that out too. | When you do PCA (or any dimensionality reduction), what is "the number of dimensions"? | I don't like these videos. They only make understanding PCA more difficult by bringing in irrelevant details. Also, they're long and wordy.
The idea of PCA is very simple when it comes to applications | When you do PCA (or any dimensionality reduction), what is "the number of dimensions"?
I don't like these videos. They only make understanding PCA more difficult by bringing in irrelevant details. Also, they're long and wordy.
The idea of PCA is very simple when it comes to applications. You have several series of data, call them variables. Say you have N variables (series) $x_1(t),x_2(t),\dots,x_N(t)$.
Sometimes there are a few underlying factors that drive all these series. Let's say there are M factors $f_1(t),f_2(t),\dots,f_M(t)$, and you suspect (or know for sure) that they drive the variables:
$$x_1(t)=c_{11}f_1(t)+\dots+c_{1M}f_M(t)\\\dots\\
x_N(t)=c_{N1}f_1(t)+\dots+c_{NM}f_M(t)$$
So, you're interested to extract the factor values $f_j(t)$ and the coefficients $c_{ij}$. So, PCA is one way of accomplishing this. In fact, if you don't know what is the exact number of components M, it can help you find that out too. | When you do PCA (or any dimensionality reduction), what is "the number of dimensions"?
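Here is a minimal R sketch of that idea (the number of factors, series length, loadings and noise level are all made up):
set.seed(1)
Tn <- 500; N <- 10; M <- 2
f <- matrix(rnorm(Tn * M), Tn, M)                          # latent factors f_j(t)
C <- matrix(runif(N * M), N, M)                            # coefficients c_ij
x <- f %*% t(C) + matrix(rnorm(Tn * N, sd = 0.1), Tn, N)   # observed variables x_i(t)
pc <- prcomp(x, scale. = TRUE)
summary(pc)$importance[3, 1:4]   # cumulative variance flattens out after about M = 2 components
head(pc$x[, 1:2])                # estimated factor values (up to rotation and scaling)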
I don't like these videos. They only make understanding PCA more difficult by bringing in irrelevant details. Also, they're long and wordy.
The idea of PCA is very simple when it comes to applications |
55,854 | Derivation of Gini Impurity Formula | I remember reading this exact thing on Wikipedia thinking it was a typo. It's not though. And the math is really simple. Note that $f_if_k$ corresponds to the probability of observing an $i$ followed by a $k$ from two independent draws from the distribution $f$. Therefore, if you sum over the probabilities of all $(i,k)$ pairs you get $1$. In other words, we have the equality,
$$\sum_{i=1}^J \sum_{k=1}^J f_i f_k = 1$$
But we can rewrite the double summation as
$$\sum_{i=1}^J \sum_{k=1}^J f_i f_k = \sum_{i=1}^J f_i^2 + \sum_{i=1}^J \sum_{k=1, k \ne i}^J f_i f_k$$
Then, if you subtract $\sum_{i=1}^J f_i^2$ from the top and bottom, you end up with the equality of interest. | Derivation of Gini Impurity Formula | I remember reading this exact thing on Wikipedia thinking it was a typo. It's not though. And the math is really simple. Note that $f_if_k$ corresponds to the probability of observing an $i$ followed | Derivation of Gini Impurity Formula
I remember reading this exact thing on Wikipedia thinking it was a typo. It's not though. And the math is really simple. Note that $f_if_k$ corresponds to the probability of observing an $i$ followed by a $k$ from two independent draws from the distribution $f$. Therefore, if you sum over the probabilities of all $(i,k)$ pairs you get $1$. In other words, we have the equality,
$$\sum_{i=1}^J \sum_{k=1}^J f_i f_k = 1$$
But we can rewrite the double summation as
$$\sum_{i=1}^J \sum_{k=1}^J f_i f_k = \sum_{i=1}^J f_i^2 + \sum_{i=1}^J \sum_{k=1, k \ne i}^J f_i f_k$$
Then, if you subtract $\sum_{i=1}^J f_i^2$ from the top and bottom, you end up with the equality of interest. | Derivation of Gini Impurity Formula
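A quick numeric check of this identity in R (a sketch with an arbitrary probability vector):
f <- c(0.5, 0.3, 0.2)
1 - sum(f^2)                               # 0.62
sum(outer(f, f) * (1 - diag(length(f))))   # sum over i != k of f_i * f_k, also 0.62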
I remember reading this exact thing on Wikipedia thinking it was a typo. It's not though. And the math is really simple. Note that $f_if_k$ corresponds to the probability of observing an $i$ followed |
55,855 | Derivation of Gini Impurity Formula | Here is a snippet from my answer here. The easiest way (for me at least) to understand
$1-\sum f_i^2$ = $\sum_{i \neq k} f_if_k$
is by visually representing each of the elements in this equation. We'll assume that there are 4 labels below; however, this will scale to n values.
The value 1 is simply the sum of all possible probabilities. By definition this value must be 1.
The value $\sum f_i^2$ is the sum of probabilities of selecting a value and its label from the distribution of values.
Subtracting the probability that you match labels with values from 1 gives you the probability that you don't match labels and values. This is what the gini impurity provides -- the probability that you don't match labels to values. | Derivation of Gini Impurity Formula | Here is a snippet from my answer here. The easiest way (for me at least) to understand
$1-\sum f_i^2$ = $\sum_{i \neq k} f_if_k$
is by visually representing each of the elements in this equation. W | Derivation of Gini Impurity Formula
Here is a snippet from my answer here. The easiest way (for me at least) to understand
$1-\sum f_i^2$ = $\sum_{i \neq k} f_if_k$
is by visually representing each of the elements in this equation. We'll assume that there are 4 labels below; however, this will scale to n values.
The value 1 is simply the sum of all possible probabilities. By definition this value must be 1.
The value $\sum f_i^2$ is the sum of probabilities of selecting a value and its label from the distribution of values.
Subtracting the probability that you match labels with values from 1 gives you the probability that you don't match labels and values. This is what the gini impurity provides -- the probability that you don't match labels to values. | Derivation of Gini Impurity Formula
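That interpretation is easy to check by simulation in R (a minimal sketch; the class proportions are made up):
set.seed(1)
f <- c(0.1, 0.2, 0.3, 0.4)                      # proportions of the 4 labels
1 - sum(f^2)                                    # Gini impurity: 0.7
a <- sample(4, 1e5, replace = TRUE, prob = f)   # draw a value
b <- sample(4, 1e5, replace = TRUE, prob = f)   # independently draw a label for it
mean(a != b)                                    # fraction of mismatches, close to 0.7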
Here is a snippet from my answer here. The easiest way (for me at least) to understand
$1-\sum f_i^2$ = $\sum_{i \neq k} f_if_k$
is by visually representing each of the elements in this equation. W |
55,856 | What are shortcomings of PCA as a dimensionality reduction technique compared to t-SNE? | It all depends on how you understand "similarity" and what the goal of your transformation into the low-dimensional representation is.
PCA does not attempt to group "similar" points, whatever this "similarity" may be. PCA is a method of constructing a particular linear transformation which results in new coordinates of the samples with very well defined properties (such as orthogonality between the different components). The fact that "similar" points group together is, one could say, a byproduct. Or, rather, the fact that very often, "similar" points (say, samples from the same experimental group) cluster in the first components is due to
groups being major contributors to the overall variance (and first components catch the major part of the total variance)
differences between groups being very often linear in respect to one variable or another
t-SNE is an algorithm designed with a different goal in mind -- the ability to group "similar" data points even in a context of lack of linearity. The similarity is defined in a very particular way (consult the Wikipedia for details). This definition of similarity is not exactly a common one, and stresses local similarity and local density.
However, while t-SNE is very good at tackling the particular goal of clustering close samples, it has a major disadvantage compared to PCA: it gives you a low-dimensional representation of your data, but it does not give you a transformation. In other words, you cannot
interpret the dimensions in a similar way you interpret the loadings in a PCA
apply the transformation to a new data set
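For example, in R (a minimal sketch; it assumes the Rtsne package is installed and uses made-up data):
library(Rtsne)
set.seed(1)
train <- matrix(rnorm(100 * 5), 100, 5)
test  <- matrix(rnorm(10 * 5), 10, 5)
pc <- prcomp(train)
head(predict(pc, newdata = test))    # PCA is a transformation, so it can be applied to new data
ts <- Rtsne(train, perplexity = 10)
head(ts$Y)                           # t-SNE only returns an embedding of the training points
# there is no corresponding predict() step for the t-SNE result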
It might be, therefore, useful to explore multidimensional data, but it is not very much so in more general tasks such as machine learning and interpretation of the ML models. | What are shortcomings of PCA as a dimensionality reduction technique compared to t-SNE? | It all depends on how you understand "similarity" and what the goal of your transformation into the low-dimensional representation is.
PCA does not attempt to group "similar" points, whatever this "s | What are shortcomings of PCA as a dimensionality reduction technique compared to t-SNE?
It all depends on how you understand "similarity" and what the goal of your transformation into the low-dimensional representation is.
PCA does not attempt to group "similar" points, whatever this "similarity" may be. PCA is a method of constructing a particular linear transformation which results in new coordinates of the samples with very well defined properties (such as orthogonality between the different components). The fact that "similar" points group together is, one could say, a byproduct. Or, rather, the fact that very often, "similar" points (say, samples from the same experimental group) cluster in the first components is due to
groups being major contributors to the overall variance (and first components catch the major part of the total variance)
differences between groups being very often linear in respect to one variable or another
t-SNE is an algorithm designed with a different goal in mind -- the ability to group "similar" data points even in a context of lack of linearity. The similarity is defined in a very particular way (consult the Wikipedia for details). This definition of similarity is not exactly a common one, and stresses local similarity and local density.
However, while t-SNE is very good at tackling the particular goal of clustering close samples, it has a major disadvantage compared to PCA: it gives you a low-dimensional representation of your data, but it does not give you a transformation. In other words, you cannot
interpret the dimensions in a similar way you interpret the loadings in a PCA
apply the transformation to a new data set
It might be, therefore, useful to explore multidimensional data, but it is not very much so in more general tasks such as machine learning and interpretation of the ML models. | What are shortcomings of PCA as a dimensionality reduction technique compared to t-SNE?
It all depends on how you understand "similarity" and what the goal of your transformation into the low-dimensional representation is.
PCA does not attempt to group "similar" points, whatever this "s |
55,857 | rpart, Cross Validation. [closed] | The rpart package's plotcp function plots the Complexity Parameter Table for an rpart tree fit on the training dataset. You don't need to supply any additional validation datasets when using the plotcp function.
The Rpart implementation first fits a fully grown tree on the entire data $D$ with $T$ terminal nodes. After this step, the tree is pruned to the smallest tree with the lowest misclassification loss. This is how it works:
The data is then split into $n$ (default = 10) randomly selected folds: $F_1$ to $F_{10}$
It then uses 10-fold cross-validation and fits each sub-tree $T_1 ... T_m $ on each training fold $D_s$.
The corresponding misclassification loss (risk) $R_m$ for each sub-tree is then calculated by comparing the class predicted for the validation fold with the actual class, and this risk value for each sub-tree is summed over all folds.
The complexity parameter $\beta$ giving the lowest total risk over the whole dataset is finally selected.
The full data is then fit using this complexity parameter and this tree is selected as the best trimmed tree.
Hence, when you use plotcp, it plots the relative cross-validation error for each sub-tree from smallest to largest to let you compare the risk for each complexity parameter $\beta$.
For example, refer to the following fit using the StageC cancer prognosis data-set in the rpart package:
> printcp( cfit)
Classification tree:
rpart(formula = progstat ~ age + eet + g2 + grade + gleason +
ploidy, data = stagec, method = "class")
Variables actually used in tree construction:
[1] age g2 grade ploidy
Root node error: 54/146 = 0.36986
n= 146
CP nsplit rel error xerror xstd
1 0.104938 0 1.00000 1.00000 0.10802
2 0.055556 3 0.68519 1.16667 0.11083
3 0.027778 4 0.62963 0.96296 0.10715
4 0.018519 6 0.57407 0.96296 0.10715
5 0.010000 7 0.55556 1.00000 0.10802
The cross-validated error (xerror) is scaled relative to the root node error for easier reading; the error bars on the plot show one standard deviation (xstd) of the cross-validated error.
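A common follow-up (a sketch reusing the cfit object above) is to pick the cp value with the lowest cross-validated error and prune with it:
plotcp(cfit)   # visualise xerror against cp
best <- cfit$cptable[which.min(cfit$cptable[, "xerror"]), "CP"]
pruned <- prune(cfit, cp = best)
printcp(pruned)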
References:
An Introduction to Recursive Partitioning Using the RPART Routines, URL: https://cran.r-project.org/web/packages/rpart/vignettes/longintro.pdf | rpart, Cross Validation. [closed] | The rpart package's plotcp function plots the Complexity Parameter Table for an rpart tree fit on the training dataset. You don't need to supply any additional validation datasets when using the plotc | rpart, Cross Validation. [closed]
The rpart package's plotcp function plots the Complexity Parameter Table for an rpart tree fit on the training dataset. You don't need to supply any additional validation datasets when using the plotcp function.
The Rpart implementation first fits a fully grown tree on the entire data $D$ with $T$ terminal nodes. After this step, the tree is pruned to the smallest tree with the lowest misclassification loss. This is how it works:
The data is then split into $n$ (default = 10) randomly selected folds: $F_1$ to $F_{10}$
It then uses 10-fold cross-validation and fits each sub-tree $T_1 ... T_m $ on each training fold $D_s$.
The corresponding misclassification loss (risk) $R_m$ for each sub-tree is then calculated by comparing the class predicted for the validation fold with the actual class, and this risk value for each sub-tree is summed over all folds.
The complexity parameter $\beta$ giving the lowest total risk over the whole dataset is finally selected.
The full data is then fit using this complexity parameter and this tree is selected as the best trimmed tree.
Hence, when you use plotcp, it plots the relative cross-validation error for each sub-tree from smallest to largest to let you compare the risk for each complexity parameter $\beta$.
For example, refer to the following fit using the StageC cancer prognosis data-set in the rpart package:
> printcp( cfit)
Classification tree:
rpart(formula = progstat ~ age + eet + g2 + grade + gleason +
ploidy, data = stagec, method = "class")
Variables actually used in tree construction:
[1] age g2 grade ploidy
Root node error: 54/146 = 0.36986
n= 146
CP nsplit rel error xerror xstd
1 0.104938 0 1.00000 1.00000 0.10802
2 0.055556 3 0.68519 1.16667 0.11083
3 0.027778 4 0.62963 0.96296 0.10715
4 0.018519 6 0.57407 0.96296 0.10715
5 0.010000 7 0.55556 1.00000 0.10802
The cross-validated error (xerror) is scaled relative to the root node error for easier reading; the error bars on the plot show one standard deviation (xstd) of the cross-validated error.
References:
An Introduction to Recursive Partitioning Using the RPART Routines, URL: https://cran.r-project.org/web/packages/rpart/vignettes/longintro.pdf | rpart, Cross Validation. [closed]
The rpart package's plotcp function plots the Complexity Parameter Table for an rpart tree fit on the training dataset. You don't need to supply any additional validation datasets when using the plotc |
55,858 | Combining uncertain measurements | If I'm understanding your question properly, this sounds like you need Inverse variance weighting.
https://en.wikipedia.org/wiki/Inverse-variance_weighting
The estimate of your $x'$ that would minimize the variance (so giving you the "best guess") will be given by
\begin{equation}
\hat{x} = \frac{\sum_i x_i/\sigma^2_{x,i}}{\sum_i 1/\sigma^2_{x,i}}
\end{equation}
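In R this is a one-liner (a minimal sketch with made-up measurements and standard deviations):
x     <- c(10.2, 9.8, 10.5)    # measurements x_i
sigma <- c(0.5, 0.2, 1.0)      # their standard deviations sigma_{x,i}
w     <- 1 / sigma^2           # inverse-variance weights
sum(w * x) / sum(w)            # the minimum-variance combined estimate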
You stated that the uncertainties in your measurements were "iid". If they have different variances, then they are not identical, just independent.
For Inverse Variance Weighting to work, they only need to be independent. | Combining uncertain measurements | If I'm understanding your question properly, this sounds like you need Inverse variance weighting.
https://en.wikipedia.org/wiki/Inverse-variance_weighting
The estimate of your $x'$ that would minimiz | Combining uncertain measurements
If I'm understanding your question properly, this sounds like you need Inverse variance weighting.
https://en.wikipedia.org/wiki/Inverse-variance_weighting
The estimate of your $x'$ that would minimize the variance (so giving you the "best guess") will be given by
\begin{equation}
\hat{x} = \frac{\sum_i x_i/\sigma^2_{x,i}}{\sum_i 1/\sigma^2_{x,i}}
\end{equation}
You stated that the uncertainties in your measurements were "iid". If they have different variances, then they are not identical, just independent.
For Inverse Variance Weighting to work, they only need to be independent. | Combining uncertain measurements
If I'm understanding your question properly, this sounds like you need Inverse variance weighting.
https://en.wikipedia.org/wiki/Inverse-variance_weighting
The estimate of your $x'$ that would minimiz |
55,859 | Combining uncertain measurements | And the inverse square of the error on the combined value is the sum of the inverse squares of the individual errors:
$$ \frac{1}{\sigma^2} = \sum_i \frac{1}{\sigma_{x,i}^2}$$
For a derivation, see the section on statistical methods of any experimental physics handbook.
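Numerically, continuing the same kind of sketch in R (the individual errors are made up):
sigma <- c(0.5, 0.2, 1.0)    # individual errors sigma_{x,i}
1 / sqrt(sum(1 / sigma^2))   # combined error, smaller than the smallest individual error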
(The fact that you have each measurement has an x and y value doesn't add any complexity; only the x values contribute to the combined x value and only the y values contribute to the combined y value.) | Combining uncertain measurements | And the inverse square of the error on the combined value is the sum of the inverse squares of the individual errors:
$$ \frac{1}{\sigma^2} = \sum_i \frac{1}{\sigma_{x,i}^2}$$
For a derivation, see th | Combining uncertain measurements
And the inverse square of the error on the combined value is the sum of the inverse squares of the individual errors:
$$ \frac{1}{\sigma^2} = \sum_i \frac{1}{\sigma_{x,i}^2}$$
For a derivation, see the section on statistical methods of any experimental physics handbook.
(The fact that you have each measurement has an x and y value doesn't add any complexity; only the x values contribute to the combined x value and only the y values contribute to the combined y value.) | Combining uncertain measurements
And the inverse square of the error on the combined value is the sum of the inverse squares of the individual errors:
$$ \frac{1}{\sigma^2} = \sum_i \frac{1}{\sigma_{x,i}^2}$$
For a derivation, see th |
55,860 | How to interpret the clusplot in R | The clusplot uses PCA to draw the data. It uses the first two principal components to explain the data.
You can read more about it here Making sense of principal component analysis, eigenvectors & eigenvalues.
Principal components are the (orthogonal) axes that along them the data has the most variability, if your data is 2d then using two principal components can explain the whole variability of the data, thus the reason you see 100% explained. If your data is from a higher dimension but has a lot of correlations you can use a lower dimensional space to explain it.
The difference you see between the graphs is plotting them along the PCA components.
library(MASS)    # provides mvrnorm
library(cluster) # provides clusplot
x <- mvrnorm(200, c(-2,2), diag(2))
y <- mvrnorm(200, c(2,-2), diag(2))
tot <- rbind(x,y)
par(mfrow = c(2,2))
## Original
plot(x, ylim = c(-5,5), xlim = c(-5,5), col = 'green', main = 'Original Data')
points(y, col = 'red')
## Clusplot
kmean.fit <- kmeans(tot, 2)
clusplot(tot, kmean.fit$cluster, main = 'Clusplot')
## PCA plot
pca.tot <- princomp(tot)
plot(tot[1:200,] %*% pca.tot$loading, ylim = c(-3,3), xlim = c(-6,6), col = 'red', main = 'PCA')
points(tot[201:400, ]%*% pca.tot$loading ,col = 'green')
You can play around, add dimensions and change the correlation structure and see how you get different results. | How to interpret the clusplot in R | The clusplot uses PCA to draw the data. It uses the first two principal components to explain the data.
You can read more about it here Making sense of principal component analysis, eigenvectors & e | How to interpret the clusplot in R
The clusplot uses PCA to draw the data. It uses the first two principal components to explain the data.
You can read more about it here Making sense of principal component analysis, eigenvectors & eigenvalues.
Principal components are the (orthogonal) axes that along them the data has the most variability, if your data is 2d then using two principal components can explain the whole variability of the data, thus the reason you see 100% explained. If your data is from a higher dimension but has a lot of correlations you can use a lower dimensional space to explain it.
The difference you see between the graphs is plotting them along the PCA components.
library(MASS)    # provides mvrnorm
library(cluster) # provides clusplot
x <- mvrnorm(200, c(-2,2), diag(2))
y <- mvrnorm(200, c(2,-2), diag(2))
tot <- rbind(x,y)
par(mfrow = c(2,2))
## Original
plot(x, ylim = c(-5,5), xlim = c(-5,5), col = 'green', main = 'Original Data')
points(y, col = 'red')
## Clusplot
kmean.fit <- kmeans(tot, 2)
clusplot(tot, kmean.fit$cluster, main = 'Clusplot')
## PCA plot
pca.tot <- princomp(tot)
plot(tot[1:200,] %*% pca.tot$loading, ylim = c(-3,3), xlim = c(-6,6), col = 'red', main = 'PCA')
points(tot[201:400, ]%*% pca.tot$loading ,col = 'green')
You can play around, add dimensions and change the correlation structure and see how you get different results. | How to interpret the clusplot in R
The clusplot uses PCA to draw the data. It uses the first two principal components to explain the data.
You can read more about it here Making sense of principal component analysis, eigenvectors & e |
55,861 | In layman's terms, why is Naive Bayes the dominant algorithm used for text-classification? | I'm taking your word for Naive Bayes' popularity here as language processing isn't my specialty:
One reason NB is useful is the bias–variance tradeoff. Spam/sentiment type data are often noisy and usually high-dimensional (more predictors than samples, $n \ll p$). The naive assumption that predictors are independent of one another is a strong, high-bias one.
By assuming independence of predictors we're saying that covariance matrix of our model only has non-zero entries on the diagonal. Since estimating covariance structure in $n \ll p$ situations is very hard indeed we are usually forced to put some constraints on the problem. The independence assumption is a particularly strong constraint that yields a highly interpretable model. The introduced bias may sufficiently reduce variance that you get better predictions. | In layman's terms, why is Naive Bayes the dominant algorithm used for text-classification? | I'm taking your word for Naive Bayes' popularity here as language processing isn't my specialty:
One reason NB is useful is the bias–variance tradeoff. Spam/sentiment type data are often noisy and us | In layman's terms, why is Naive Bayes the dominant algorithm used for text-classification?
I'm taking your word for Naive Bayes' popularity here as language processing isn't my specialty:
One reason NB is useful is the bias–variance tradeoff. Spam/sentiment type data are often noisy and usually high-dimensional (more predictors than samples, $n \ll p$). The naive assumption that predictors are independent of one another is a strong, high-bias one.
By assuming independence of predictors we're saying that covariance matrix of our model only has non-zero entries on the diagonal. Since estimating covariance structure in $n \ll p$ situations is very hard indeed we are usually forced to put some constraints on the problem. The independence assumption is a particularly strong constraint that yields a highly interpretable model. The introduced bias may sufficiently reduce variance that you get better predictions. | In layman's terms, why is Naive Bayes the dominant algorithm used for text-classification?
I'm taking your word for Naive Bayes' popularity here as language processing isn't my specialty:
One reason NB is useful is the bias–variance tradeoff. Spam/sentiment type data are often noisy and us |
55,862 | Are there any implementations/examples of hierarchical classifiers? | I couldn't find an implementation of Hierarchical Classification on scikit-learn official documentation. But I found this repository recently. This module is based on scikit-learn's interfaces and conventions. I hope this will be useful.
https://github.com/globality-corp/sklearn-hierarchical-classification
It's possible to install it with pip:
pip install sklearn-hierarchical-classification
A thorough usage example is provided in the repo . | Are there any implementations/examples of hierarchical classifiers? | I couldn't find an implementation of Hierarchical Classification on scikit-learn official documentation. But I found this repository recently. This module is based on scikit-learn's interfaces and con | Are there any implementations/examples of hierarchical classifiers?
I couldn't find an implementation of Hierarchical Classification on scikit-learn official documentation. But I found this repository recently. This module is based on scikit-learn's interfaces and conventions. I hope this will be useful.
https://github.com/globality-corp/sklearn-hierarchical-classification
It's possible to install it with pip:
pip install sklearn-hierarchical-classification
A thorough usage example is provided in the repo . | Are there any implementations/examples of hierarchical classifiers?
I couldn't find an implementation of Hierarchical Classification on scikit-learn official documentation. But I found this repository recently. This module is based on scikit-learn's interfaces and con |
55,863 | Are there any implementations/examples of hierarchical classifiers? | If anyone stumbles across this, check out the package I developed to handle this type of data.
Here's the tutorial investigating antibiotic resistance
The peer-reviewed publication is Espinoza-Dupont et al. 2021 | Are there any implementations/examples of hierarchical classifiers? | If anyone stumbles across this, check out the package I developed to handle this type of data.
Here's the tutorial investigating antibiotic resistance
The peer-reviewed publication is Espinoza-Dupont | Are there any implementations/examples of hierarchical classifiers?
If anyone stumbles across this, check out the package I developed to handle this type of data.
Here's the tutorial investigating antibiotic resistance
The peer-reviewed publication is Espinoza-Dupont et al. 2021 | Are there any implementations/examples of hierarchical classifiers?
If anyone stumbles across this, check out the package I developed to handle this type of data.
Here's the tutorial investigating antibiotic resistance
The peer-reviewed publication is Espinoza-Dupont |
55,864 | Are there any implementations/examples of hierarchical classifiers? | We just released a new library compatible with scikit-learn to create local hierarchical classifiers.
It can be easily installed with pip install hiclass or conda install hiclass. Documentation and examples can be found at https://gitlab.com/dacs-hpi/hiclass.
I hope this will be useful for future readers. :) | Are there any implementations/examples of hierarchical classifiers? | We just released a new library compatible with scikit-learn to create local hierarchical classifiers.
It can be easily installed with pip install hiclass or conda install hiclass. Documentation and ex | Are there any implementations/examples of hierarchical classifiers?
We just released a new library compatible with scikit-learn to create local hierarchical classifiers.
It can be easily installed with pip install hiclass or conda install hiclass. Documentation and examples can be found at https://gitlab.com/dacs-hpi/hiclass.
I hope this will be useful for future readers. :) | Are there any implementations/examples of hierarchical classifiers?
We just released a new library compatible with scikit-learn to create local hierarchical classifiers.
It can be easily installed with pip install hiclass or conda install hiclass. Documentation and ex |
55,865 | What is a hypothesis class in SVM? | In classification in general, the hypothesis class is the set of possible classification functions you're considering; the learning algorithm picks a function from the hypothesis class.
For a decision tree learner, the hypothesis class would just be the set of all possible decision trees.
For a primal SVM, this is the set of functions
$$\mathsf H_d =\left\{ f(x) = \operatorname{sign}\left( w^T x + b \right) \mid w \in \mathbb R^d, b \in \mathbb R \right\}.$$
The SVM learning process involves choosing a $w$ and $b$, i.e. choosing a function from this class.
For a kernelized SVM, we have some feature function $\varphi : \mathcal X \to \mathcal H$ corresponding to the kernel by $k(x, y) = \langle \varphi(x), \varphi(y) \rangle_{\mathcal H}$; here the hypothesis class becomes
$$\mathsf H_k = \{ f(x) = \operatorname{sign}\left( \langle w, \varphi(x) \rangle_{\mathcal H} + b \right) \mid w \in \mathcal H, b \in \mathbb R \}.$$
Now, since $\mathcal H$ is often infinite-dimensional, we don't want to explicitly represent a $w \in \mathcal H$. But the representer theorem tells us that the $w$ which optimizes our SVM loss for a given training set $X = \{ x_i \}_{i=1}^n$ will be of the form $w = \sum_{i=1}^n \alpha_i \varphi(x_i)$. Noting that $$
\langle w, \varphi(x) \rangle_{\mathcal H}
= \left\langle \sum_{i=1}^n \alpha_i \varphi(x_i), \varphi(x) \right\rangle_{\mathcal H}
= \sum_{i=1}^n \alpha_i \left\langle \varphi(x_i), \varphi(x) \right\rangle_{\mathcal H}
= \sum_{i=1}^n \alpha_i k(x_i, x),$$
we can thus consider only the restricted set of functions
$$\mathsf H_k^X = \left\{ f(x) = \operatorname{sign}\left( \sum_{i=1}^n \alpha_i k(x_i, x) + b \right) \mid \alpha \in \mathbb R^n, b \in \mathbb R \right\}.$$
Note that $\mathsf H_k^X \subset \mathsf H_k$, but we know that the hypothesis the SVM algorithm would pick from $\mathsf H_k$ is in $\mathsf H_k^X$, so that's okay.
The support vectors specifically are the points with $\alpha_i \ne 0$. Which points are support vectors or not depends on the regularization constant and so on, so I wouldn't necessarily say that they're integrally related to the hypothesis class; but the set of possible support vectors, i.e. the training set $X$, of course defines $\mathsf H_k^X$ (along with the kernel $k$). | What is a hypothesis class in SVM? | In classification in general, the hypothesis class is the set of possible classification functions you're considering; the learning algorithm picks a function from the hypothesis class.
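To make this concrete, here is a small R sketch using the e1071 package (assumed installed; the data are made up). The fitted object exposes the expansion above through the nonzero coefficients:
library(e1071)
set.seed(1)
x <- matrix(rnorm(40 * 2), 40, 2)
y <- factor(ifelse(x[, 1] + x[, 2] > 0, "a", "b"))
fit <- svm(x, y, kernel = "radial", cost = 1)
fit$index         # which training points are support vectors (alpha_i != 0)
head(fit$coefs)   # the corresponding nonzero coefficients (alpha_i times the label)
fit$rho           # minus the intercept b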
For a decision | What is a hypothesis class in SVM?
In classification in general, the hypothesis class is the set of possible classification functions you're considering; the learning algorithm picks a function from the hypothesis class.
For a decision tree learner, the hypothesis class would just be the set of all possible decision trees.
For a primal SVM, this is the set of functions
$$\mathsf H_d =\left\{ f(x) = \operatorname{sign}\left( w^T x + b \right) \mid w \in \mathbb R^d, b \in \mathbb R \right\}.$$
The SVM learning process involves choosing a $w$ and $b$, i.e. choosing a function from this class.
For a kernelized SVM, we have some feature function $\varphi : \mathcal X \to \mathcal H$ corresponding to the kernel by $k(x, y) = \langle \varphi(x), \varphi(y) \rangle_{\mathcal H}$; here the hypothesis class becomes
$$\mathsf H_k = \{ f(x) = \operatorname{sign}\left( \langle w, \varphi(x) \rangle_{\mathcal H} + b \right) \mid w \in \mathcal H, b \in \mathbb R \}.$$
Now, since $\mathcal H$ is often infinite-dimensional, we don't want to explicitly represent a $w \in \mathcal H$. But the representer theorem tells us that the $w$ which optimizes our SVM loss for a given training set $X = \{ x_i \}_{i=1}^n$ will be of the form $w = \sum_{i=1}^n \alpha_i \varphi(x_i)$. Noting that $$
\langle w, \varphi(x) \rangle_{\mathcal H}
= \left\langle \sum_{i=1}^n \alpha_i \varphi(x_i), \varphi(x) \right\rangle_{\mathcal H}
= \sum_{i=1}^n \alpha_i \left\langle \varphi(x_i), \varphi(x) \right\rangle_{\mathcal H}
= \sum_{i=1}^n \alpha_i k(x_i, x),$$
we can thus consider only the restricted set of functions
$$\mathsf H_k^X = \left\{ f(x) = \operatorname{sign}\left( \sum_{i=1}^n \alpha_i k(x_i, x) + b \right) \mid \alpha \in \mathbb R^n, b \in \mathbb R \right\}.$$
Note that $\mathsf H_k^X \subset \mathsf H_k$, but we know that the hypothesis the SVM algorithm would pick from $\mathsf H_k$ is in $\mathsf H_k^X$, so that's okay.
The support vectors specifically are the points with $\alpha_i \ne 0$. Which points are support vectors or not depends on the regularization constant and so on, so I wouldn't necessarily say that they're integrally related to the hypothesis class; but the set of possible support vectors, i.e. the training set $X$, of course defines $\mathsf H_k^X$ (along with the kernel $k$). | What is a hypothesis class in SVM?
In classification in general, the hypothesis class is the set of possible classification functions you're considering; the learning algorithm picks a function from the hypothesis class.
For a decision |
55,866 | How to determine if a matrix is close to being negative (semi-)definite? | As Wikipedia says: "A Hermitian $n \times n$ matrix $A$ is defined as being positive-definite (PD) if the scalar $b^T A b$ is positive for every non-zero column vector $b$ of $n$ real numbers". In addition, $A$ can be equivalently be defined as PD in terms of its eigenvalues $\lambda$ (all being positive) and in terms of its Cholesky decomposition $A = LL^T$ (existing).
The fastest way to check therefore if a given matrix $A$ is PD is to check if $A$ has a Cholesky decomposition. If Cholesky decomposition fails, then $A$ is not a PD matrix. Given that $A$ is PD we expect all the diagonal elements of $L$ to be real and strictly positive. The closer they are to $0$ the closer the matrix $A$ is to not being PD. (This relates closely to a fourth characteristics of PD matrices which is to have positive leading principal minors.) More stringently you can also check the eigenvalues of it and see if they are all positive. The closer they are to $0$ the closer $A$ is to not being PD. This also allows a direct interpretation of "why $A$ is not PD". As an eigenvalue gets closer and closer to $0$ this means that the column space spanned by $A$ is shrinking and at some point one of its dimensions collapses onto another, ie. we have at least one column that is the linear combination of other columns in matrix $A$.
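A small R sketch of both checks (for an arbitrary symmetric matrix A):
A <- matrix(c(2, 1, 1, 2), 2, 2)                             # example symmetric matrix
!inherits(try(chol(A), silent = TRUE), "try-error")          # TRUE: a Cholesky factor exists, so A is PD
min(eigen(A, symmetric = TRUE, only.values = TRUE)$values)   # smallest eigenvalue; values near 0 mean A is close to not being PD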
Given a matrix $A$ that is PD, the fastest way to get a matrix "close to" it that is not PD is to set one of the diagonal elements of $L$ to 0 (or something very small). That's one way, but it is a bit blunt. You will effectively invalidate one or more of $A$'s leading principal minors.
A second way that is more carefully examined would be to use the singular value decomposition $A = USV^T$ and set one of the diagonal elements holding the singular values in $S$ to $0$. This will guarantee that the reconstructed matrix $A^{s_n=0}$ will be the closest matrix to $A$ in term of Frobenius norm (given you choose to "invalidate" the smallest singular value $s$). This method will directly relate to characterisation of $A$ as being PD based on its eigenvalues; in the case of a covariance matrix $A$ the singular values of $A$ equate its eigenvalues so when we set one of the singular values to $0$ we effectively set one of the eigenvalues to $0$. Addendum: We can also use this eigenvalue-based methodology to directly to create near PD matrices without a starting PD matrix $A$ (which is actually the general case). Given any $n \times p$ matrix $B$ we calculate the singular value decomposition $B = USV^T$, set one or more of the diagonal values of $S$ to (near) $0$ values and calculate $B^{s_n = 0} = V S V^T$; $B^{s_n = 0}$ will be PSD.
In R that would be something like:
n = 1000;
p = 10;
set.seed(1);
B = matrix(rnorm(n * p), ncol=p);
svdB = svd(B);
B0 = svdB$v %*% diag(c(svdB$d[1:(p - 1)], .Machine$double.eps)) %*% t(svdB$v)  # replace the smallest singular value by (near) zero
Notice that floating-point arithmetic issues will kick-in so even if you set one of the diagonal element of $L$ or $S$ to $0.0000000001$ or something very small, you might still end up with non PD matrix. This will relate to the magnitude of the other elements in $L$ or $S$ (and probably in the numerical linear algebra used (OpenBLAS, MKL, etc.)). | How to determine if a matrix is close to being negative (semi-)definite? | As Wikipedia says: "A Hermitian $n \times n$ matrix $A$ is defined as being positive-definite (PD) if the scalar $b^T A b$ is positive for every non-zero column vector $b$ of $n$ real numbers". In add | How to determine if a matrix is close to being negative (semi-)definite?
As Wikipedia says: "A Hermitian $n \times n$ matrix $A$ is defined as being positive-definite (PD) if the scalar $b^T A b$ is positive for every non-zero column vector $b$ of $n$ real numbers". In addition, $A$ can be equivalently be defined as PD in terms of its eigenvalues $\lambda$ (all being positive) and in terms of its Cholesky decomposition $A = LL^T$ (existing).
The fastest way to check therefore if a given matrix $A$ is PD is to check if $A$ has a Cholesky decomposition. If Cholesky decomposition fails, then $A$ is not a PD matrix. Given that $A$ is PD we expect all the diagonal elements of $L$ to be real and strictly positive. The closer they are to $0$ the closer the matrix $A$ is to not being PD. (This relates closely to a fourth characteristics of PD matrices which is to have positive leading principal minors.) More stringently you can also check the eigenvalues of it and see if they are all positive. The closer they are to $0$ the closer $A$ is to not being PD. This also allows a direct interpretation of "why $A$ is not PD". As an eigenvalue gets closer and closer to $0$ this means that the column space spanned by $A$ is shrinking and at some point one of its dimensions collapses onto another, ie. we have at least one column that is the linear combination of other columns in matrix $A$.
Given a matrix $A$ that is PD, the fastest way to get a matrix "close to" it that is not PD is to set one of the diagonal elements of $L$ to 0 (or something very small). That's one way, but it is a bit blunt. You will effectively invalidate one or more of $A$'s leading principal minors.
A second way that is more carefully examined would be to use the singular value decomposition $A = USV^T$ and set one of the diagonal elements holding the singular values in $S$ to $0$. This will guarantee that the reconstructed matrix $A^{s_n=0}$ will be the closest matrix to $A$ in term of Frobenius norm (given you choose to "invalidate" the smallest singular value $s$). This method will directly relate to characterisation of $A$ as being PD based on its eigenvalues; in the case of a covariance matrix $A$ the singular values of $A$ equate its eigenvalues so when we set one of the singular values to $0$ we effectively set one of the eigenvalues to $0$. Addendum: We can also use this eigenvalue-based methodology to directly to create near PD matrices without a starting PD matrix $A$ (which is actually the general case). Given any $n \times p$ matrix $B$ we calculate the singular value decomposition $B = USV^T$, set one or more of the diagonal values of $S$ to (near) $0$ values and calculate $B^{s_n = 0} = V S V^T$; $B^{s_n = 0}$ will be PSD.
In R that would be something like:
n = 1000;
p = 10;
set.seed(1);
B = matrix(rnorm(n * p), ncol=p);
svdB = svd(B);
B0 = svdB$v %*% diag(c(svdB$d[1:(p - 1)], .Machine$double.eps)) %*% t(svdB$v)  # replace the smallest singular value by (near) zero
Notice that floating-point arithmetic issues will kick-in so even if you set one of the diagonal element of $L$ or $S$ to $0.0000000001$ or something very small, you might still end up with non PD matrix. This will relate to the magnitude of the other elements in $L$ or $S$ (and probably in the numerical linear algebra used (OpenBLAS, MKL, etc.)). | How to determine if a matrix is close to being negative (semi-)definite?
As Wikipedia says: "A Hermitian $n \times n$ matrix $A$ is defined as being positive-definite (PD) if the scalar $b^T A b$ is positive for every non-zero column vector $b$ of $n$ real numbers". In add |
55,867 | How to determine if a matrix is close to being negative (semi-)definite? | For a real symmetric matrix $M=P\,\text{diag}(\lambda_1,\dots,\lambda_n)\, P^T$, the nearest (in Frobenius norm) PSD matrix is obtained by thresholding the eigenvalues at 0:$$\text{Proj}_{\mathcal{S}_n^+}(M)=P\,\text{diag}\big(\max(\lambda_1,0),\dots,\max(\lambda_n,0)\big)\, P^T.$$
See e.g. here. Hence, one interpretable measure of $M$'s distance from being PSD is simply the distance of its spectrum from the positive orthant.
So, to create almost PSD matrices, take any orthonormal matrix $P$ and then set $M=PDP^T$ for $D=\text{diag}(\lambda_1,\dots,\lambda_n)$ and the $\lambda_i$ are near to the edge of said orthant. For example, in R:
n = 10
P = qr.Q(qr(matrix(rnorm(n * n), nrow=n)))
D = diag(rnorm(n, sd=.1)^2)
M = P %*% D %*% t(P) | How to determine if a matrix is close to being negative (semi-)definite? | For a real symmetric matrix $M=P\,\text{diag}(\lambda_1,\dots,\lambda_n)\, P^T$, the nearest (in Frobenius norm) PSD matrix is obtained by thresholding the eigenvalues at 0:$$\text{Proj}_{\mathcal{S}_ | How to determine if a matrix is close to being negative (semi-)definite?
For a real symmetric matrix $M=P\,\text{diag}(\lambda_1,\dots,\lambda_n)\, P^T$, the nearest (in Frobenius norm) PSD matrix is obtained by thresholding the eigenvalues at 0:$$\text{Proj}_{\mathcal{S}_n^+}(M)=P\,\text{diag}\big(\max(\lambda_1,0),\dots,\max(\lambda_n,0)\big)\, P^T.$$
See e.g. here. Hence, one interpretable measure of $M$'s distance from being PSD is simply the distance of its spectrum from the positive orthant.
So, to create almost PSD matrices, take any orthonormal matrix $P$ and then set $M=PDP^T$ for $D=\text{diag}(\lambda_1,\dots,\lambda_n)$ and the $\lambda_i$ are near to the edge of said orthant. For example, in R:
n = 10
P = qr.Q(qr(matrix(rnorm(n * n), nrow=n)))
D = diag(rnorm(n, sd=.1)^2)
M = P %*% D %*% t(P) | How to determine if a matrix is close to being negative (semi-)definite?
For a real symmetric matrix $M=P\,\text{diag}(\lambda_1,\dots,\lambda_n)\, P^T$, the nearest (in Frobenius norm) PSD matrix is obtained by thresholding the eigenvalues at 0:$$\text{Proj}_{\mathcal{S}_ |
55,868 | How to show the inconsistency of the OLS estimator for unit-root AR(1) processes by simulation? | You will not be able to show this result (by simulation or otherwise) because it does not hold. When the true AR parameter is unity, the OLS estimator is superconsistent, not inconsistent. See for example the discussion in Hamilton's Time Series Analysis, section "Asymptotic Properties of a First-Order Autoregression when the True Coefficient is Unity" (17.4).
What you can illustrate with simulation is this superconsistency. Simply repeat several Monte Carlo simulations of the OLS estimate (I did just 1000), similar to what was done above, but for a range of sample sizes. See plots below:
What you should observe is that the sample bias of the OLS estimate gets closer to 0 faster in the nonstationary $\phi=1$ case (although it started out larger) and the variance shrinks to zero faster as well. OLS works just fine in that case.
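A minimal R sketch of that Monte Carlo exercise (the replication count and sample sizes are arbitrary):
set.seed(1)
ols_ar1 <- function(n) {
  x <- cumsum(rnorm(n))        # a pure random walk, i.e. the true coefficient is 1
  coef(lm(x[-1] ~ x[-n]))[2]   # OLS estimate of the AR(1) coefficient
}
for (n in c(50, 200, 1000)) {
  est <- replicate(1000, ols_ar1(n))
  cat("n =", n, " mean bias =", round(mean(est) - 1, 4), " sd =", round(sd(est), 4), "\n")
}
Both the bias and the spread collapse towards zero much faster than the usual 1/sqrt(n) rate.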
Edit: In the text mentioned above, the following expression is derived in the case where the true coefficient is unity:
$$\sqrt{T}\left(\hat{\rho}_T-1\right) \to^P 0$$
Essentially, $\hat{\rho}_T$ (the OLS estimator) converges to 1 much faster than $\sqrt{T}$ goes to infinity. It's typical for consistent estimators to converge at a slower rate, so this behavior is termed "superconsistency". | How to show the inconsistency of the OLS estimator for unit-root AR(1) processes by simulation? | You will not be able to show this result (by simulation or otherwise) because it does not hold. When the true AR parameter is unity, the OLS estimator is superconsistent, not inconsistent. See for exa | How to show the inconsistency of the OLS estimator for unit-root AR(1) processes by simulation?
You will not be able to show this result (by simulation or otherwise) because it does not hold. When the true AR parameter is unity, the OLS estimator is superconsistent, not inconsistent. See for example the discussion in Hamilton's Time Series Analysis, section "Asymptotic Properties of a First-Order Autoregression when the True Coefficient is Unity" (17.4).
What you can illustrate with simulation is this superconsistency. Simply repeat several Monte Carlo simulations of the OLS estimate (I did just 1000), similar to what was done above, but for a range of sample sizes. See plots below:
What you should observe is that the sample bias of the OLS estimate gets closer to 0 faster in the nonstationary $\phi=1$ case (although it started out larger) and the variance shrinks to zero faster as well. OLS works just fine in that case.
Edit: In the text mentioned above, the following expression is derived in the case where the true coefficient is unity:
$$\sqrt{T}\left(\hat{\rho}_T-1\right) \to^P 0$$
Essentially, $\hat{\rho}_T$ (the OLS estimator) converges to 1 much faster than $\sqrt{T}$ goes to infinity. It's typical for consistent estimators to converge at a slower rate, so this behavior is termed "superconsistency". | How to show the inconsistency of the OLS estimator for unit-root AR(1) processes by simulation?
You will not be able to show this result (by simulation or otherwise) because it does not hold. When the true AR parameter is unity, the OLS estimator is superconsistent, not inconsistent. See for exa |
55,869 | How to show the inconsistency of the OLS estimator for unit-root AR(1) processes by simulation? | The unit root issues in regression are usually associated with the presence of a unit root in the dependent or independent variables. For instance, if you regress one variable on another and both of them have unit roots, then you'll likely end up with a spurious regression.
Otherwise, the answers so far have focused on estimating the autoregression coefficient $\phi$ in $x_t=c+\phi x_{t-1}+\varepsilon_t$, where the unit root is not an issue at all. It becomes an issue when you try to regress $y\sim x$, where $y$ has a unit root too.
To demonstrate the spurious regression you simply generate the pairs of independent processes $y,x$, regress $y\sim x$ and show how often the regression comes up significant. It should not be significant because the processes are not related, but you'll detect the slope where it shouldn't be. | How to show the inconsistency of the OLS estimator for unit-root AR(1) processes by simulation? | The unit root issues in regression are usually associated with the presence of it in the dependent or independent variables. For instance, if you regress one variable on another and both of them have | How to show the inconsistency of the OLS estimator for unit-root AR(1) processes by simulation?
The unit root issues in regression are usually associated with the presence of a unit root in the dependent or independent variables. For instance, if you regress one variable on another and both of them have unit roots, then you'll likely end up with a spurious regression.
Otherwise, the answers so far have focused on estimating the autoregression coefficient $\phi$ in $x_t=c+\phi x_{t-1}+\varepsilon_t$, where the unit root is not an issue at all. It becomes an issue when you try to regress $y\sim x$, where $y$ has a unit root too.
To demonstrate the spurious regression you simply generate the pairs of independent processes $y,x$, regress $y\sim x$ and show how often the regression comes up significant. It should not be significant because the processes are not related, but you'll detect the slope where it shouldn't be. | How to show the inconsistency of the OLS estimator for unit-root AR(1) processes by simulation?
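A minimal R sketch of that demonstration (sample size and replication count are arbitrary):
set.seed(1)
pvals <- replicate(1000, {
  y <- cumsum(rnorm(100))                 # two independent random walks
  x <- cumsum(rnorm(100))
  summary(lm(y ~ x))$coefficients[2, 4]   # p-value of the slope
})
mean(pvals < 0.05)                        # far above the nominal 5% rejection rate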
The unit root issues in regression are usually associated with the presence of it in the dependent or independent variables. For instance, if you regress one variable on another and both of them have |
55,870 | Expected value of $h(X)$. When can the order of $E$ and $h$ be inverted? | In real analysis and probability theory there is an elegant result called Jensen's inequality. What this says is that for any random variable $X$ and a convex function $h$ we have
$$h\left(\mathbb{E} [X]\right) \leq \mathbb{E} \left[ h(X) \right]$$
and the inequality is reversed if $h$ is concave, i.e.
$$h\left(\mathbb{E} [X]\right) \geq \mathbb{E} \left[ h(X) \right]$$
Your question, when do we have $h\left(\mathbb{E} [X]\right) = \mathbb{E} \left[ h(X) \right]$, can then be answered by considering functions that are both concave and convex at the same time. As pointed out, affine functions clearly meet this requirement, since they are the only concave and convex functions everywhere at the same time. | Expected value of $h(X)$. When can the order of $E$ and $h$ be inverted? | In real analysis and probability theory there is an elegant result called Jensen's inequality. What this says is that for any random variable $X$ and a convex function $h$ we have
$$h\left(\mathbb{E} | Expected value of $h(X)$. When can the order of $E$ and $h$ be inverted?
In real analysis and probability theory there is an elegant result called Jensen's inequality. What this says is that for any random variable $X$ and a convex function $h$ we have
$$h\left(\mathbb{E} [X]\right) \leq \mathbb{E} \left[ h(X) \right]$$
and the inequality is reversed if $h$ is concave, i.e.
$$h\left(\mathbb{E} [X]\right) \geq \mathbb{E} \left[ h(X) \right]$$
Your question, when do we have $h\left(\mathbb{E} [X]\right) = \mathbb{E} \left[ h(X) \right]$, can then be answered by considering functions that are both concave and convex at the same time. As pointed out, affine functions clearly meet this requirement, since they are the only concave and convex functions everywhere at the same time. | Expected value of $h(X)$. When can the order of $E$ and $h$ be inverted?
In real analysis and probability theory there is an elegant result called Jensen's inequality. What this says is that for any random variable $X$ and a convex function $h$ we have
$$h\left(\mathbb{E} |
55,871 | Understanding Word2Vec | what are my actual word vectors in the end?
The actual word vectors are the hidden representations $h$
Basically, multiplying a one hot vector with $\mathbf{W_{V\times N}}$ will give you a $1$$\times$$N$ vector which represents the word vector for the one hot you entered.
Here we multiply the one hot $1$$\times$$5$ for say 'chicken' with synapse 1 $\mathbf{W_{V\times N}}$ to get the vector representation : $1$$\times$$3$
Basically, $\mathbf{W_{V\times N}}$ captures the hidden representations in the form of a look up table. To get the look up value, multiply $\mathbf{W_{V\times N}}$ with the one hot of that word.
That would mean each input matrix $\mathbf{W_{V\times N}}$ would be $(1 \times 10^{11}) \times 300$ in size!?
Yes, that is correct.
Keep in mind 2 things:
It is Google. They have a lot of computational resources.
A lot of optimisations were used to speed up training. You can go through the original code which is publicly available.
Shouldn't it be possible to use lower dimensional vectors?
I assume you mean use a vector like [ 1.2 4.5 4.3] to represent say 'chicken'. Feed that into the network and train on it. Seems like a good idea. I cannot justify the reasoning well enough, but I would like to point out the following:
One Hots allow us to activate only one input neuron at once. So the representation of the word falls down to specific weights just for that word.
Here, the one hot for 'juice' is activating just 4 synaptic links per synapse.
The loss function used is probably Cross Entropy Loss which usually employs one hot representations. This loss function heavily penalises incorrect classifications which is aided by one hot representations. In fact, most classification tasks employ one hots with Cross Entropy Loss.
I know this isn't a satisfactory reasoning.
I hope this clears some things up.
Here are some resources :
The famous article by Chris McCormick
Interactive w2v model : wevi
Understand w2v by understanding it in tensorflow (my article (shameless advertisement,but it covers what I want to say)) | Understanding Word2Vec | what are my actual word vectors in the end?
The actual word vectors are the hidden representations $h$
Basically, multiplying a one hot vector with $\mathbf{W_{V\times N}}$ will give you a $1$$\times | Understanding Word2Vec
what are my actual word vectors in the end?
The actual word vectors are the hidden representations $h$
Basically, multiplying a one hot vector with $\mathbf{W_{V\times N}}$ will give you a $1$$\times$$N$ vector which represents the word vector for the one hot you entered.
Here we multiply the one hot $1$$\times$$5$ for say 'chicken' with synapse 1 $\mathbf{W_{V\times N}}$ to get the vector representation : $1$$\times$$3$
Basically, $\mathbf{W_{V\times N}}$ captures the hidden representations in the form of a look up table. To get the look up value, multiply $\mathbf{W_{V\times N}}$ with the one hot of that word.
That would mean each input matrix $\mathbf{W_{V\times N}}$ would be $(1 \times 10^{11}) \times 300$ in size!?
Yes, that is correct.
Keep in mind 2 things:
It is Google. They have a lot of computational resources.
A lot of optimisations were used to speed up training. You can go through the original code which is publicly available.
Shouldn't it be possible to use lower dimensional vectors?
I assume you mean use a vector like [ 1.2 4.5 4.3] to represent say 'chicken'. Feed that into the network and train on it. Seems like a good idea. I cannot justify the reasoning well enough, but I would like to point out the following:
One Hots allow us to activate only one input neuron at once. So the representation of the word falls down to specific weights just for that word.
Here, the one hot for 'juice' is activating just 4 synaptic links per synapse.
The loss function used is probably Cross Entropy Loss which usually employs one hot representations. This loss function heavily penalises incorrect classifications which is aided by one hot representations. In fact, most classification tasks employ one hots with Cross Entropy Loss.
I know this isn't a satisfactory reasoning.
I hope this clears some things up.
Here are some resources :
The famous article by Chris McCormick
Interactive w2v model : wevi
Understand w2v by understanding it in tensorflow (my article (shameless advertisement,but it covers what I want to say)) | Understanding Word2Vec
what are my actual word vectors in the end?
The actual word vectors are the hidden representations $h$
Basically, multiplying a one hot vector with $\mathbf{W_{V\times N}}$ will give you a $1$$\times |
55,872 | How to fit a generalized logistic function? | Given the binary response $y_i$ and the covariate $x_i$, $i=1,2,\dots,n$, the likelihood for your model is
$$
L(\beta_0,\beta_1,p_\text{min},p_\text{max})=\prod_{i=1}^n p_i^{y_i}(1-p_i)^{1-y_i}
$$
where each
$$
p_i=p_\text{min} + (p_\text{max} - p_\text{min})\frac{1}{1+\exp(-(\beta_0 + \beta_1 x_i))}.
$$
Just write a function computing the log of this and apply some general purpose optimization algorithm to maximise this numerically with respect to the four parameters. For example, in R do:
# the log likelihood
loglik <- function(par,y,x) {
beta0 <- par[1]
beta1 <- par[2]
pmin <- par[3]
pmax <- par[4]
p <- pmin + (pmax - pmin)*plogis(beta0 + beta1*x)
sum(dbinom(y, size=1, prob=p, log=TRUE))
}
# simulated data
x <- seq(-10,10,len=1000)
y <- rbinom(n=length(x),size=1,prob=.2 + .6*plogis(.5*x))
# fit the model
optim(c(0, 0.5, .1, .9), loglik, method="L-BFGS-B", control=list(fnscale=-1), y=y, x=x, lower=c(-Inf,-Inf,0,0), upper=c(Inf,Inf,1,1))
Note that to test for evidence of a lower plateau at $p_\text{min}$ in your data, your $H_0:p_\text{min}=0$ is at the boundary of the parameter space and the approximate/asymptotic distribution of $2(\log L(\hat\theta_1)-\log L(\hat\theta_0))$ is going to be a mixture of chi-square distributions with 1 and 0 degrees of freedom, see Self, S. G. & Liang, K. Asymptotic properties of maximum likelihood estimators and likelihood ratio tests under nonstandard conditions J. Amer. Statist. Assoc., 1987, 82, 605-610.
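For example, a rough sketch of that boundary-adjusted test in R, reusing loglik from above (the names loglik0, fit0 and fit1 are mine, not part of the original code):
# restricted model with pmin fixed at 0; par = (beta0, beta1, pmax)
loglik0 <- function(par, y, x) loglik(c(par[1], par[2], 0, par[3]), y, x)
fit0 <- optim(c(0, 0.5, .9), loglik0, method="L-BFGS-B", control=list(fnscale=-1),
              y=y, x=x, lower=c(-Inf,-Inf,0), upper=c(Inf,Inf,1))
fit1 <- optim(c(0, 0.5, .1, .9), loglik, method="L-BFGS-B", control=list(fnscale=-1),
              y=y, x=x, lower=c(-Inf,-Inf,0,0), upper=c(Inf,Inf,1,1))
lr <- 2 * (fit1$value - fit0$value)
# 50:50 mixture of chi-square(0) and chi-square(1) under H0 on the boundary
pval <- 0.5 * pchisq(lr, df = 1, lower.tail = FALSE)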
In the simpler case that there is only one plateau (so $p_\text{max}=1$ or $p_\text{min}=0$) the model is equivalent to a zero-inflated binary regression model that can be fitted with e.g. the glmmTMB R-package. | How to fit a generalized logistic function? | Given the binary response $y_i$ and the covariate $x_i$, $i=1,2,\dots,n$, the likelihood for your model is
$$
L(\beta_0,\beta_1,p_\text{min},p_\text{max})=\prod_{i=1}^n p_i^{y_i}(1-p_i)^{1-y_i}
$$
whe | How to fit a generalized logistic function?
Given the binary response $y_i$ and the covariate $x_i$, $i=1,2,\dots,n$, the likelihood for your model is
$$
L(\beta_0,\beta_1,p_\text{min},p_\text{max})=\prod_{i=1}^n p_i^{y_i}(1-p_i)^{1-y_i}
$$
where each
$$
p_i=p_\text{min} + (p_\text{max} - p_\text{min})\frac{1}{1+\exp(-(\beta_0 + \beta_1 x_i))}.
$$
Just write a function computing the log of this and apply some general purpose optimization algorithm to maximise this numerically with respect to the four parameters. For example, in R do:
# the log likelihood
loglik <- function(par,y,x) {
beta0 <- par[1]
beta1 <- par[2]
pmin <- par[3]
pmax <- par[4]
p <- pmin + (pmax - pmin)*plogis(beta0 + beta1*x)
sum(dbinom(y, size=1, prob=p, log=TRUE))
}
# simulated data
x <- seq(-10,10,len=1000)
y <- rbinom(n=length(x),size=1,prob=.2 + .6*plogis(.5*x))
# fit the model
optim(c(0, 0.5, .1, .9), loglik, method="L-BFGS-B", control=list(fnscale=-1), y=y, x=x, lower=c(-Inf,-Inf,0,0), upper=c(Inf,Inf,1,1))
Note that to test for evidence of a lower plateau at $p_\text{min}$ in your data, your $H_0:p_\text{min}=0$ is at the boundary of the parameter space and the approximate/asymptotic distribution of $2(\log L(\hat\theta_1)-\log L(\hat\theta_0))$ is going to be a mixture of chi-square distributions with 1 and 0 degrees of freedom, see Self, S. G. & Liang, K. Asymptotic properties of maximum likelihood estimators and likelihood ratio tests under nonstandard conditions J. Amer. Statist. Assoc., 1987, 82, 605-610.
In the simpler case that there is only one plateau (so $p_\text{max}=1$ or $p_\text{min}=0$) the model is equivalent to a zero-inflated binary regression model that can be fitted with e.g. the glmmTMB R-package. | How to fit a generalized logistic function?
Given the binary response $y_i$ and the covariate $x_i$, $i=1,2,\dots,n$, the likelihood for your model is
$$
L(\beta_0,\beta_1,p_\text{min},p_\text{max})=\prod_{i=1}^n p_i^{y_i}(1-p_i)^{1-y_i}
$$
whe |
55,873 | Is house price prediction a regression or a time series problem? | While the other answer is correct that the response variable can be modelled as a linear regression - you are dealing with house prices. As such, your dataset will likely suffer from what is called time series induced heteroscedasticity.
What this basically means is that, because your houses will vary in age (some could be one year old, others over thirty years old), you will have non-constant variance across your residuals.
If you see this abstract titled "Heteroscedasticity in hedonic house price models", you will note that using Generalised Least Squares was indicated to remove the heteroscedasticity with forecast errors of a lower standard deviation that would be obtained through standard Ordinary Least Squares.
In summary to your question, your data can be modelled using regression analysis, but you do need to watch out for heteroscedasticity and also serial correlation. Moreover, you might find that your distribution (run a qqPlot to check) may not be normal, and your analysis might be better served through first converting your data to that of a normal distribution using a Box-Cox transformation. | Is house price prediction a regression or a time series problem? | While the other answer is correct that the response variable can be modelled as a linear regression - you are dealing with house prices. As such, your dataset will likely suffer from what is called ti | Is house price prediction a regression or a time series problem?
While the other answer is correct that the response variable can be modelled as a linear regression - you are dealing with house prices. As such, your dataset will likely suffer from what is called time series induced heteroscedasticity.
What this basically means is that, because your houses will vary in age (some could be one year old, others over thirty years old), you will have non-constant variance across your residuals.
If you see this abstract titled "Heteroscedasticity in hedonic house price models", you will note that using Generalised Least Squares was indicated to remove the heteroscedasticity with forecast errors of a lower standard deviation that would be obtained through standard Ordinary Least Squares.
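As an illustration only (the data frame and variable names below are placeholders, not from that paper), one quick way to check for and then model such heteroscedasticity in R is:
library(lmtest)   # for bptest()
library(nlme)     # for gls()
ols_fit <- lm(price ~ sqft + age, data = house_data)
bptest(ols_fit)   # Breusch-Pagan test; a small p-value suggests non-constant variance
# GLS with an error variance that grows with house age, one simple alternative to OLS
gls_fit <- gls(price ~ sqft + age, data = house_data, weights = varExp(form = ~ age))
summary(gls_fit)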
In summary to your question, your data can be modelled using regression analysis, but you do need to watch out for heteroscedasticity and also serial correlation. Moreover, you might find that your distribution (run a qqPlot to check) may not be normal, and your analysis might be better served through first converting your data to that of a normal distribution using a Box-Cox transformation. | Is house price prediction a regression or a time series problem?
While the other answer is correct that the response variable can be modelled as a linear regression - you are dealing with house prices. As such, your dataset will likely suffer from what is called ti |
55,874 | Is house price prediction a regression or a time series problem? | You needn't choose one or the other. Price can be modeled as a function both of its own past values and of independent variables. The latter may themselves have lagged values that additionally help predict price. | Is house price prediction a regression or a time series problem? | You needn't choose one or the other. Price can be modeled as a function both of its own past values and of independent variables. The latter may themselves have lagged values that additionally help | Is house price prediction a regression or a time series problem?
You needn't choose one or the other. Price can be modeled as a function both of its own past values and of independent variables. The latter may themselves have lagged values that additionally help predict price. | Is house price prediction a regression or a time series problem?
You needn't choose one or the other. Price can be modeled as a function both of its own past values and of independent variables. The latter may themselves have lagged values that additionally help |
55,875 | Is house price prediction a regression or a time series problem? | Your response variable - house price, and predictors - year, city. Am I correct on this? You could model this as a linear regression and predict the house price. | Is house price prediction a regression or a time series problem? | Your response variable - house price, and predictors - year, city. Am I correct on this? You could model this as a linear regression and predict the house price. | Is house price prediction a regression or a time series problem?
Your response variable - house price, and predictors - year, city. Am I correct on this? You could model this as a linear regression and predict the house price. | Is house price prediction a regression or a time series problem?
Your response variable - house price, and predictors - year, city. Am I correct on this? You could model this as a linear regression and predict the house price. |
55,876 | Difference of $R^2$ between OLS with individual dummies to panel fixed effect model? | In essence, yes. The $R^2$ value given for fixed effects regressions is often called the "within $R^2$". If you use stata, the output will give overall, within, and between $R^2$. If you use the plm package in R, it just give the within $R^2$. The basic difference between the overall and within $R^2$ is that the within finds the total sum of squares on the demeaned outcome variable. Fixed effects regression demeans the y for each fixed entity.
For the fixed effects model, $$R^2 = 1 - \dfrac{SSR}{TSS_{demeaned \ y}} = 1 - \dfrac{\sum(y - \hat y)^2}{\sum([y_i - \bar y_i] - \overline {[y_i - \bar y_i]})^2}$$
To demonstrate in R using the EmplUK data from plm:
> library(plm)
> data("EmplUK")
> fixed <- plm(emp ~ wage + capital, data = EmplUK, index=
c("firm"), model = "within")
> fixed.dum <- lm(emp ~ wage + capital + factor(firm) - 1,
data = EmplUK)
> summary(fixed.dum)$r.squared[1]
[1] 0.9870826
> summary(fixed)$r.squared[1]
      rsq
0.1635585
>
> #"Within" R2
> SSR <- sum(fixed$residuals^2)
> demeaned_y <- EmplUK$emp -
tapply(EmplUK$emp, EmplUK$firm,mean)[EmplUK$firm]
> TSS_demeaned_y <- sum((demeaned_y-mean(demeaned_y))^2)
> within_R2 <- 1-(SSR/TSS_demeaned_y)
> c(summary(fixed)$r.squared[1], "rsq" = within_R2)
rsq rsq
0.1635585 0.1635585 | Difference of $R^2$ between OLS with individual dummies to panel fixed effect model? | In essence, yes. The $R^2$ value given for fixed effects regressions is often called the "within $R^2$". If you use stata, the output will give overall, within, and between $R^2$. If you use the plm p | Difference of $R^2$ between OLS with individual dummies to panel fixed effect model?
In essence, yes. The $R^2$ value given for fixed effects regressions is often called the "within $R^2$". If you use stata, the output will give overall, within, and between $R^2$. If you use the plm package in R, it just give the within $R^2$. The basic difference between the overall and within $R^2$ is that the within finds the total sum of squares on the demeaned outcome variable. Fixed effects regression demeans the y for each fixed entity.
For the fixed effects model, $$R^2 = 1 - \dfrac{SSR}{TSS_{demeaned \ y}} = 1 - \dfrac{\sum(y - \hat y)^2}{\sum([y_i - \bar y_i] - \overline {[y_i - \bar y_i]})^2}$$
To demonstrate in R using the EmplUK data from plm:
> library(plm)
> data("EmplUK")
> fixed <- plm(emp ~ wage + capital, data = EmplUK, index=
c("firm"), model = "within")
> fixed.dum <- lm(emp ~ wage + capital + factor(firm) - 1,
data = EmplUK)
> summary(fixed.dum)$r.squared[1]
[1] 0.9870826
> summary(fixed)$r.squared[1]
      rsq
0.1635585
>
> #"Within" R2
> SSR <- sum(fixed$residuals^2)
> demeaned_y <- EmplUK$emp -
tapply(EmplUK$emp, EmplUK$firm,mean)[EmplUK$firm]
> TSS_demeaned_y <- sum((demeaned_y-mean(demeaned_y))^2)
> within_R2 <- 1-(SSR/TSS_demeaned_y)
> c(summary(fixed)$r.squared[1], "rsq" = within_R2)
rsq rsq
0.1635585 0.1635585 | Difference of $R^2$ between OLS with individual dummies to panel fixed effect model?
In essence, yes. The $R^2$ value given for fixed effects regressions is often called the "within $R^2$". If you use stata, the output will give overall, within, and between $R^2$. If you use the plm p |
55,877 | Difference of $R^2$ between OLS with individual dummies to panel fixed effect model? | I have been looking for the three types of R-squared of the Fixed Effects model outputs in R as well.
Thanks to the help of @paqmo, I was able to manually calculate and reproduce lfe's full and proj R-squared using the model fit from the standard lm package. That said, I am quite certain that the full R-sq is straightforward, meaning R-sq of all pairs of predicted values and original values. At the same time, their proj R-squared is also identical to the so-called within R-squared (definitions from STATA), which is the default reported R-squared in the plm package.
After reading STATA manual Page 10 briefly, I think the full R-sq in lfe and overall R-sq in STATA are the same idea. I see some people said overall R-sq is a weighted average of within and between R-sq, but I did not see any supporting evidence for this statement. I only see that both overall and full R-sq are directly calculated from the pairs of predicted y and original y.
Below are my own calculations for full and proj R-sq.
fe_lm_mod <- lm(formula = "y ~ x1 + x2 + entity - 1",
data = dataframe)
## Calculate prediction
y_predict <- predict(fe_lm_mod, newdata = dataframe)
y_original <- dataframe$y
# Get the valid values indices
notmiss <- which((!is.na(y_predict)) & (!is.na(y_original)))
# Residual sum of squares
SSres <- sum((y_original[notmiss] - y_predict[notmiss])**2)
# Calculate full R2
SStot_full <- sum((y_original[notmiss] -
mean(y_original[notmiss]))**2)
### get the demean. The within finds the total sum of
### squares on the demeaned outcome variable.
### References
# https://stats.stackexchange.com/questions/262246/difference-of-r2-between-ols-with-individual-dummies-to-panel-fixed-effect-mo
demeaned_y <- y_original[notmiss] -
tapply(y_original[notmiss], dataframe$entity[notmiss],
mean)[dataframe$entity][notmiss]
# Calculate within R2
SStot_within <- sum((demeaned_y-mean(demeaned_y))^2)
print(paste("calculated full R2", 1 - SSres/SStot_full))
print(paste("calculated within R2", 1 - SSres/SStot_within))
For between R-sq, I think the plm package with model="between" may produce between R-sq, but I am not very sure. One can try to calculate it based on the STATA manual, like what I did for full and within R-sq.
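As a sketch (reusing the placeholder names dataframe, y, x1, x2 and entity from the code above), the between estimator and its reported R-sq can be requested directly:
library(plm)
between_mod <- plm(y ~ x1 + x2, data = dataframe, index = "entity", model = "between")
summary(between_mod)$r.squared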
So far I made a summary for the R-sq outputs (to be continued):
lm R-sq: not good for Fixed Effects model, cannot reproduce
lfe "full" R-sq: R-sq for all pairs predicted y and original y, may also be called as "overall" R-sq
lfe "proj" R-sq: "within" R-sq: how much of the variation in the dependent variable within each entity group is captured by the model
plm model="within" R-sq: same as 3.
plm model="between" R-sq: "between" R-sq: how much of the variation in the dependent variable between each entity group is captured by the model
plm model="pooling" R-sq: not good for Fixed Effects model. This is the standard OLS R-sq. It is not a Fixed Effects model R-sq. | Difference of $R^2$ between OLS with individual dummies to panel fixed effect model? | I have been looking for the three types of R-squared of the Fixed Effects model outputs in R as well.
Thanks to the help of @paqmo, I was able to manually calculate and reproduce lfe's full and proj R | Difference of $R^2$ between OLS with individual dummies to panel fixed effect model?
I have been looking for the three types of R-squared of the Fixed Effects model outputs in R as well.
Thanks to the help of @paqmo, I was able to manually calculate and reproduce lfe's full and proj R-squared using the model fit from the standard lm package. That said, I am quite certain that the full R-sq is straightforward, meaning R-sq of all pairs of predicted values and original values. At the same time, their proj R-squared is also identical to the so-called within R-squared (definitions from STATA), which is the default reported R-squared in the plm package.
After reading STATA manual Page 10 briefly, I think the full R-sq in lfe and overall R-sq in STATA are the same idea. I see some people said overall R-sq is a weighted average of within and between R-sq, but I did not see any supporting evidence for this statement. I only see that both overall and full R-sq are directly calculated from the pairs of predicted y and original y.
Below are my own calculations for full and proj R-sq.
fe_lm_mod <- lm(formula = "y ~ x1 + x2 + entity - 1",
data = dataframe)
## Calculate prediction
y_predict <- predict(fe_lm_mod, newdata = dataframe)
y_original <- dataframe$y
# Get the valid values indices
notmiss <- which((!is.na(y_predict)) & (!is.na(y_original)))
# Residual sum of squares
SSres <- sum((y_original[notmiss] - y_predict[notmiss])**2)
# Calculate full R2
SStot_full <- sum((y_original[notmiss] -
mean(y_original[notmiss]))**2)
### get the demean. The within finds the total sum of
### squares on the demeaned outcome variable.
### References
# https://stats.stackexchange.com/questions/262246/difference-of-r2-between-ols-with-individual-dummies-to-panel-fixed-effect-mo
demeaned_y <- y_original[notmiss] -
tapply(y_original[notmiss], dataframe$entity[notmiss],
mean)[dataframe$entity][notmiss]
# Calculate within R2
SStot_within <- sum((demeaned_y-mean(demeaned_y))^2)
print(paste("calculated full R2", 1 - SSres/SStot_full))
print(paste("calculated within R2", 1 - SSres/SStot_within))
For between R-sq, I think the plm package with model="between" may produce between R-sq, but I am not very sure. One can try to calculate it based on the STATA manual, like what I did for full and within R-sq.
So far I made a summary for the R-sq outputs (to be continued):
lm R-sq: not good for Fixed Effects model, cannot reproduce
lfe "full" R-sq: R-sq for all pairs predicted y and original y, may also be called as "overall" R-sq
lfe "proj" R-sq: "within" R-sq: how much of the variation in the dependent variable within each entity group is captured by the model
plm model="within" R-sq: same as 3.
plm model="between" R-sq: "between" R-sq: how much of the variation in the dependent variable between each entity group is captured by the model
plm model="pooling" R-sq: not good for Fixed Effects model. This is the standard OLS R-sq. It is not a Fixed Effects model R-sq. | Difference of $R^2$ between OLS with individual dummies to panel fixed effect model?
I have been looking for the three types of R-squared of the Fixed Effects model outputs in R as well.
Thanks to the help of @paqmo, I was able to manually calculate and reproduce lfe's full and proj R |
55,878 | Hypothesis Testing: Permutation Testing Justification | Generally hypothesis tests are accompanied by extra assumptions that will need to hold (at least when the null is true), so that the null distribution of the test statistic can be obtained; this is as true for nonparametric tests as for parametric ones.
So for example, the usual two sample t-test comes with assumptions of equality of variance and independence - which we rely on when finding the null distribution of the test statistic - even though neither condition is in the hypothesis itself.
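To make the permutation case concrete before discussing it, here is a minimal R sketch of a two-sample permutation test on the difference in medians (the data and the number of resamples are made up); it relies on the exchangeability assumption described next.
set.seed(42)
x <- rnorm(30, mean = 0); y <- rnorm(25, mean = 0.5)
obs <- median(x) - median(y)
pooled <- c(x, y); n_x <- length(x); B <- 10000
perm_stats <- replicate(B, {
  idx <- sample(length(pooled), n_x)           # relabel the observations
  median(pooled[idx]) - median(pooled[-idx])
})
p_value <- mean(abs(perm_stats) >= abs(obs))   # two-sided permutation p-value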
For permutation tests you need add at least the assumptions required to obtain exchangeability, though typically we'd use somewhat stronger assumptions (like independence for example). | Hypothesis Testing: Permutation Testing Justification | Generally hypothesis tests are accompanied by extra assumptions that will need to hold (at least when the null is true), so that the null distribution of the test statistic can be obtained; this is as | Hypothesis Testing: Permutation Testing Justification
Generally hypothesis tests are accompanied by extra assumptions that will need to hold (at least when the null is true), so that the null distribution of the test statistic can be obtained; this is as true for nonparametric tests as for parametric ones.
So for example, the usual two sample t-test comes with assumptions of equality of variance and independence - which we rely on when finding the null distribution of the test statistic - even though neither condition is in the hypothesis itself.
For permutation tests you need add at least the assumptions required to obtain exchangeability, though typically we'd use somewhat stronger assumptions (like independence for example). | Hypothesis Testing: Permutation Testing Justification
Generally hypothesis tests are accompanied by extra assumptions that will need to hold (at least when the null is true), so that the null distribution of the test statistic can be obtained; this is as |
55,879 | Hypothesis Testing: Permutation Testing Justification | You are right that you are testing much broader assumption that the group labels are "random" and play no role in your results. As a proxy to test such hypothesis you use some test statistic that is evaluated on the permuted samples. In this case it is median, but it could something else as well. So you test if the labels are exchangeable as tested using median as criterion to assess that. In the end you will learn how "likely" would it be to find different medians in both groups if the group labels played no role in your data. | Hypothesis Testing: Permutation Testing Justification | You are right that you are testing much broader assumption that the group labels are "random" and play no role in your results. As a proxy to test such hypothesis you use some test statistic that is | Hypothesis Testing: Permutation Testing Justification
You are right that you are testing much broader assumption that the group labels are "random" and play no role in your results. As a proxy to test such hypothesis you use some test statistic that is evaluated on the permuted samples. In this case it is median, but it could something else as well. So you test if the labels are exchangeable as tested using median as criterion to assess that. In the end you will learn how "likely" would it be to find different medians in both groups if the group labels played no role in your data. | Hypothesis Testing: Permutation Testing Justification
You are right that you are testing much broader assumption that the group labels are "random" and play no role in your results. As a proxy to test such hypothesis you use some test statistic that is |
55,880 | What is the intuitive sense of the expected value of the sum of two random variables | Recall that for any random variables $X$ and $Y$ with a joint probability distribution function $p(x,y)$, the expected value of $X+Y$ is
$$ \mathbb{E}_{X,Y}[X+Y] = \int_x\int_y (x+y) p(x,y) dydx = \int_x x \int_y p(x,y)dydx + \int_y y \int_x p(x,y) dxdy$$
In the special case that $x$ and $y$ are independent, $p(x,y) = p_X(x)p_Y(y)$ and we can write the integral in the form that you had it, where $p_X$ is the marginal probability distribution of $X$ and $p_Y$ the marginal probability distribution of $Y$:
$$ \mathbb{E}[X+Y] = \int_x\int_y (x+y) p_X(x)p_Y(y) dydx$$
This is where the "product" is coming from - it represents the (infinitesmal) probability that $X=x$ and $Y=y$, which we use to weight $x+y$ appropriately when computing the expected value of $X+Y$. Think of it as searching over all possible combinations of $X$ and $Y$, and for each combination you are evaluating the value of $X+Y$ and weighting it by the probability the combination occurs.
Since you are already adding two random variables, presumably there is a meaning to the sum of the variables. For example if it is travel time, $X$ could be the time it takes for the first trip, $Y$ for the second, and $X+Y$ is the total duration. $\mathbb{E}[X+Y]$ is the expected time of the total trip.
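A quick Monte Carlo sketch of the travel-time example (the distributions are made up) shows the expected total matching the sum of the expected parts:
set.seed(7)
n <- 1e6
x <- rgamma(n, shape = 2, rate = 0.1)   # first trip, mean 20
y <- rgamma(n, shape = 3, rate = 0.1)   # second trip, mean 30
c(mean(x + y), mean(x) + mean(y))       # both are close to 50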
Linearity of expectation might help out a bit for understanding the last question. Remember that $\mathbb{E}[X+Y] = \mathbb{E}[X] + \mathbb{E}[Y]$. For the independent case, if you simplify the two integrals you'll get
$$ \mathbb{E}[X+Y] = \int_X x p_X(x) dx + \int_Y yp_Y(y)dy$$
the first integral is the expected value of $X$, the second term is the expected value of $Y$. In the travel example, you would expected the total trip time to be the expected time of the first trip plus the expected time of the second trip. | What is the intuitive sense of the expected value of the sum of two random variables | Recall that for any random variables $X$ and $Y$ with a joint probability distribution function $p(x,y)$, the expected value of $X+Y$ is
$$ \mathbb{E}_{X,Y}[X+Y] = \int_x\int_y (x+y) p(x,y) dydx = \in | What is the intuitive sense of the expected value of the sum of two random variables
Recall that for any random variables $X$ and $Y$ with a joint probability distribution function $p(x,y)$, the expected value of $X+Y$ is
$$ \mathbb{E}_{X,Y}[X+Y] = \int_x\int_y (x+y) p(x,y) dydx = \int_x x \int_y p(x,y)dydx + \int_y y \int_x p(x,y) dxdy$$
In the special case that $x$ and $y$ are independent, $p(x,y) = p_X(x)p_Y(y)$ and we can write the integral in the form that you had it, where $p_X$ is the marginal probability distribution of $X$ and $p_Y$ the marginal probability distribution of $Y$:
$$ \mathbb{E}[X+Y] = \int_x\int_y (x+y) p_X(x)p_Y(y) dydx$$
This is where the "product" is coming from - it represents the (infinitesmal) probability that $X=x$ and $Y=y$, which we use to weight $x+y$ appropriately when computing the expected value of $X+Y$. Think of it as searching over all possible combinations of $X$ and $Y$, and for each combination you are evaluating the value of $X+Y$ and weighting it by the probability the combination occurs.
Since you are already adding two random variables, presumably there is a meaning to the sum of the variables. For example if it is travel time, $X$ could be the time it takes for the first trip, $Y$ for the second, and $X+Y$ is the total duration. $\mathbb{E}[X+Y]$ is the expected time of the total trip.
Linearity of expectation might help out a bit for understanding the last question. Remember that $\mathbb{E}[X+Y] = \mathbb{E}[X] + \mathbb{E}[Y]$. For the independent case, if you simplify the two integrals you'll get
$$ \mathbb{E}[X+Y] = \int_X x p_X(x) dx + \int_Y yp_Y(y)dy$$
the first integral is the expected value of $X$, the second term is the expected value of $Y$. In the travel example, you would expected the total trip time to be the expected time of the first trip plus the expected time of the second trip. | What is the intuitive sense of the expected value of the sum of two random variables
Recall that for any random variables $X$ and $Y$ with a joint probability distribution function $p(x,y)$, the expected value of $X+Y$ is
$$ \mathbb{E}_{X,Y}[X+Y] = \int_x\int_y (x+y) p(x,y) dydx = \in |
55,881 | What is the intuitive sense of the expected value of the sum of two random variables | If $X$ and $Y$ are random variables (defined on the same probability space: ignore this remark if it confuses you), then we can regard $(X,Y)$ as a random vector (also called a bivariate random variable) and $X$ and $Y$ individually as special kinds of functions of $(X,Y)$ -- called projections or projection maps if you want to use fancy words. Another function of $(X,Y)$ is $X+Y$ (called the "sum" function, of course, what else?) and what this means is that if on a particular trial of the experiment, $X$ and $Y$ took on values $x$ and $y$ respectively (equivalently, $(X,Y)$ had value $(x,y)$), then this sum random variable (denote it by $Z$) has taken on value $x+y$ on this particular trial. There is no notion of "both occurring"; as whuber points out in his comment, you are confusing the concepts of events and random variables.
So, if $W$ is a random variable, what is $E[g(W)]$, the expected value of the function $V = g(W)$ of the random variable $W$? There are two standard ways of finding the answer: if we know, or can determine,
the distribution of $V$, then we can use the definition of expectation. For example, if $V$ is a continuous random variable with pdf $f_V(v)$, then $$E[V] = \int_{-\infty}^\infty v\cdot f_V(v) \,\mathrm dv.$$
Alternatively, we can use the
Law of the Unconscious Statistician
or LOTUS and determine the value of $E[V]=E[g(W)]$ as
$$E[V]=E[g(W)] = \int_{-\infty}^\infty g(w)\cdot f_W(w) \,\mathrm dw$$
where $f_W(w)$ is the pdf of $W$. Now, LOTUS applies to functions
of bivariate (and more generally multivariate) random variables also,
and we can find $E[Z] =E[X+Y]$ via
$$E[Z]=E[X+Y] = \int_{-\infty}^\infty \int_{-\infty}^\infty (x+y)\cdot f_{X,Y}(x,y) \,\mathrm dx \,\mathrm dy\tag{1}$$ where $f_{X,Y}(x,y)$ is
the joint pdf of $X$ and $Y$ or just the pdf of the bivariate
random variable $(X,Y)$. As a special case, when $X$ and $Y$ are
independent random variables, $f_{X,Y}(x,y) = f_{X}(x)f_{Y}(y)$ for all $x$ and $y$, and so we get the formula shown in your question. But
it is very important that you understand that $(1)$ always holds, regardless of independence etc (for jointly continuous random variables).
A funny thing happens on the way to the forum as one massages the formula $(1)$. We have that
\begin{align}
E[X+Y] &= \int_{-\infty}^\infty \int_{-\infty}^\infty (x+y)\cdot f_{X,Y}(x,y) \,\mathrm dx \,\mathrm dy\\
&= \int_{-\infty}^\infty \int_{-\infty}^\infty x\cdot f_{X,Y}(x,y) \,\mathrm dx \,\mathrm dy + \int_{-\infty}^\infty \int_{-\infty}^\infty y \cdot f_{X,Y}(x,y) \,\mathrm dx \,\mathrm dy\\
&= \int_{-\infty}^\infty x\int_{-\infty}^\infty f_{X,Y}(x,y) \,\mathrm dy \,\mathrm dx + \int_{-\infty}^\infty y \int_{-\infty}^\infty f_{X,Y}(x,y) \,\mathrm dx \,\mathrm dy\\
&= \int_{-\infty}^\infty x\left[\int_{-\infty}^\infty f_{X,Y}(x,y) \,\mathrm dy \right] \,\mathrm dx + \int_{-\infty}^\infty y \left[\int_{-\infty}^\infty f_{X,Y}(x,y) \,\mathrm dx\right] \,\mathrm dy\\
&= \int_{-\infty}^\infty x \cdot f_{X}(x) \,\mathrm dx + \int_{-\infty}^\infty y\cdot f_{Y}(y) \,\mathrm dy\\
E[X+Y] &= E[X] + E[Y]\tag{2}
\end{align}
The result $(2)$ is a special case of the linearity of expectation
because the argument above can be applied to show that $E[aX+bY] = aE[X]+bE[Y]$, that is, expectation behaves like a linear operation
with respect to random variables: the expectation of a weighted sum
is the weighted sum of the expectations.
Linearity of expectation is a very general result. It holds for all
random variables, not just the jointly continuous ones as in the calculation above, or for independent random variables only as the answer by Stefan Jorgenson seems to be suggesting. | What is the intuitive sense of the expected value of the sum of two random variables | If $X$ and $Y$ are random variables (defined on the same probability space: ignore this remark if it confuses you), then we can regard $(X,Y)$ as a random vector (also called a bivariate random variab | What is the intuitive sense of the expected value of the sum of two random variables
If $X$ and $Y$ are random variables (defined on the same probability space: ignore this remark if it confuses you), then we can regard $(X,Y)$ as a random vector (also called a bivariate random variable) and $X$ and $Y$ individually as special kinds of functions of $(X,Y)$ -- called projections or projection maps if you want to use fancy words. Another function of $(X,Y)$ is $X+Y$ (called the "sum" function, of course, what else?) and what this means is that if on a particular trial of the experiment, $X$ and $Y$ took on values $x$ and $y$ respectively (equivalently, $(X,Y)$ had value $(x,y)$), then this sum random variable (denote it by $Z$) has taken on value $x+y$ on this particular trial. There is no notion of "both occurring"; as whuber points out in his comment, you are confusing the concepts of events and random variables.
So, if $W$ is a random variable, what is $E[g(W)]$, the expected value of the function $V = g(W)$ of the random variable $W$? There are two standard ways of finding the answer: if we know, or can determine,
the distribution of $V$, then we can use the definition of expectation. For example, if $V$ is a continuous random variable with pdf $f_V(v)$, then $$E[V] = \int_{-\infty}^\infty v\cdot f_V(v) \,\mathrm dv.$$
Alternatively, we can use the
Law of the Unconscious Statistician
or LOTUS and determine the value of $E[V]=E[g(W)]$ as
$$E[V]=E[g(W)] = \int_{-\infty}^\infty g(w)\cdot f_W(w) \,\mathrm dw$$
where $f_W(w)$ is the pdf of $W$. Now, LOTUS applies to functions
of bivariate (and more generally multivariate) random variables also,
and we can find $E[Z] =E[X+Y]$ via
$$E[Z]=E[X+Y] = \int_{-\infty}^\infty \int_{-\infty}^\infty (x+y)\cdot f_{X,Y}(x,y) \,\mathrm dx \,\mathrm dy\tag{1}$$ where $f_{X,Y}(x,y)$ is
the joint pdf of $X$ and $Y$ or just the pdf of the bivariate
random variable $(X,Y)$. As a special case, when $X$ and $Y$ are
independent random variables, $f_{X,Y}(x,y) = f_{X}(x)f_{Y}(y)$ for all $x$ and $y$, and so we get the formula shown in your question. But
it is very important that you understand that $(1)$ always holds, regardless of independence etc (for jointly continuous random variables).
A funny thing happens on the way to the forum as one massages the formula $(1)$. We have that
\begin{align}
E[X+Y] &= \int_{-\infty}^\infty \int_{-\infty}^\infty (x+y)\cdot f_{X,Y}(x,y) \,\mathrm dx \,\mathrm dy\\
&= \int_{-\infty}^\infty \int_{-\infty}^\infty x\cdot f_{X,Y}(x,y) \,\mathrm dx \,\mathrm dy + \int_{-\infty}^\infty \int_{-\infty}^\infty y \cdot f_{X,Y}(x,y) \,\mathrm dx \,\mathrm dy\\
&= \int_{-\infty}^\infty x\int_{-\infty}^\infty f_{X,Y}(x,y) \,\mathrm dy \,\mathrm dx + \int_{-\infty}^\infty y \int_{-\infty}^\infty f_{X,Y}(x,y) \,\mathrm dx \,\mathrm dy\\
&= \int_{-\infty}^\infty x\left[\int_{-\infty}^\infty f_{X,Y}(x,y) \,\mathrm dy \right] \,\mathrm dx + \int_{-\infty}^\infty y \left[\int_{-\infty}^\infty f_{X,Y}(x,y) \,\mathrm dx\right] \,\mathrm dy\\
&= \int_{-\infty}^\infty x \cdot f_{X}(x) \,\mathrm dx + \int_{-\infty}^\infty y\cdot f_{Y}(y) \,\mathrm dy\\
E[X+Y] &= E[X] + E[Y]\tag{2}
\end{align}
The result $(2)$ is a special case of the linearity of expectation
because the argument above can be applied to show that $E[aX+bY] = aE[X]+bE[Y]$, that is, expectation behaves like a linear operation
with respect to random variables: the expectation of a weighted sum
is the weighted sum of the expectations.
Linearity of expectation is a very general result. It holds for all
random variables, not just the jointly continuous ones as in the calculation above, or for independent random variables only as the answer by Stefan Jorgenson seems to be suggesting. | What is the intuitive sense of the expected value of the sum of two random variables
If $X$ and $Y$ are random variables (defined on the same probability space: ignore this remark if it confuses you), then we can regard $(X,Y)$ as a random vector (also called a bivariate random variab |
55,882 | Why does pre-training help avoid the vanishing gradient problem? | Your question touches on two topics:
Preprocessing of the data.
Initialization of weights. For this question there are already good answers: What are good initial weights in a neural network?.
As for the first question, I shall refer to the paper: LeCun et al., Efficient Backprop, section 4.3. It is explained in great detail, among other issues about training. Nevertheless, some practices have changed since then. For example, ReLus have replaced tanh and sigmoids. | Why does pre-training help avoid the vanishing gradient problem? | Your question touches on two topics:
Preprocessing of the data.
Initialization of weights. For this question there are already good answers: What are good initial weights in a neural network?.
As fo | Why does pre-training help avoid the vanishing gradient problem?
Your question touches on two topics:
Preprocessing of the data.
Initialization of weights. For this question there are already good answers: What are good initial weights in a neural network?.
As for the first question, I shall refer to the paper: LeCun et al., Efficient Backprop, section 4.3. It is explained in great detail, among other issues about training. Nevertheless, some practices have changed since then. For example, ReLus have replaced tanh and sigmoids. | Why does pre-training help avoid the vanishing gradient problem?
Your question touches on two topics:
Preprocessing of the data.
Initialization of weights. For this question there are already good answers: What are good initial weights in a neural network?.
As fo |
55,883 | Why does pre-training help avoid the vanishing gradient problem? | I think it does not solve the vanishing gradient problem. The main difference between DBN and a fully-connected feed-forward neural net is that DBN uses a stack of pre-trained restricted Boltzmann machines to initialize the network’s weights. But the root of the vanishing gradient problem is not about the weight initialization but the activation function used in each neuron, although a good weight initialization sometimes could lead to faster convergence. | Why does pre-training help avoid the vanishing gradient problem? | I think it does not solve the vanishing gradient problem. The main difference between DBN and a fully-connected feed-forward neural net is that DBN uses a stack of pre-trained restricted Boltzmann mac | Why does pre-training help avoid the vanishing gradient problem?
I think it does not solve the vanishing gradient problem. The main difference between DBN and a fully-connected feed-forward neural net is that DBN uses a stack of pre-trained restricted Boltzmann machines to initialize the network’s weights. But the root of the vanishing gradient problem is not about the weight initialization but the activation function used in each neuron, although a good weight initialization sometimes could lead to faster convergence. | Why does pre-training help avoid the vanishing gradient problem?
I think it does not solve the vanishing gradient problem. The main difference between DBN and a fully-connected feed-forward neural net is that DBN uses a stack of pre-trained restricted Boltzmann mac |
55,884 | Forecasting/estimating daily hotel room demand | Yes
Yes - but you incorrectly assume ARIMA is the 'standard'. There are no standard models. I'd highly recommend reading a time series book (of which there are a number of excellent free books online). They typically will cover using ARIMA models with external regressors, dynamic regression, ETS models, etc.
NA
Maybe; depends on what your data looks like.
Depending on what you're using the data for and how important forecast accuracy is, there are a number of approaches you'll want to test using time series cross validation and/or test set holdouts. But essentially, you should look at ARIMA models that include external regressor variables for Easter. Holidays do not always fall on the same index day/week due to leap years.
Ideas for approaches to take:
Use daily data and include seasonal regressors for holidays and specify multiple seasonal periods (daily, yearly). Since we know information the model doesn't (holidays) it would be a pretty bad idea not to at least test using them.
You could aggregate the data at a weekly or monthly level, forecast those and then use a distribution pattern by month based on a moving average of that month's volume from previous years. For example, day 1 of December historically has averaged 3% of the total volume in that month, day 2 gets 2.3%, etc. The value of this method is that monthly forecasting is typically more accurate than daily due to the noise-to-signal level at the daily resolution.
I am really impressed with the recent advances via Temporal Hierarchical Forecasting. There is an implementation of this methodology in the R thief package. This methodology can work really well on high frequency data (daily, weekly data). Still, you'll want to include holidays as external regressors even to this model framework since hotel usage is likely highly impacted by holidays.
Seasonal naive using a linear adjustment up/down based on your year-over-year trend (usually decent to stick with a naive approach to the trend). You'll still need to account for leap years and holidays, as they may not align using this method.
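To illustrate the first idea above (daily data with holiday regressors and multiple seasonal periods), here is a hedged R sketch using the forecast package; occupancy, holiday_dummies and future_holiday_dummies are placeholder objects, not real data:
library(forecast)
y <- msts(occupancy, seasonal.periods = c(7, 365.25))          # weekly and yearly seasonality
xreg_hist <- cbind(fourier(y, K = c(3, 10)), holiday_dummies)  # Fourier terms + 0/1 holiday columns
fit <- auto.arima(y, xreg = xreg_hist, seasonal = FALSE)
xreg_future <- cbind(fourier(y, K = c(3, 10), h = 30), future_holiday_dummies)
fc <- forecast(fit, xreg = xreg_future)                        # 30-day-ahead forecast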
Reading a good practical forecasting book will likely be the best place to start.
EDIT:
Free online practical forecasting book link:
https://www.otexts.org/fpp | Forecasting/estimating daily hotel room demand | Yes
Yes - but you incorrectly assume ARIMA is the 'standard". There are no standard models. I'd highly recommend reading a time series book (of which there are a number of excellent free books online) | Forecasting/estimating daily hotel room demand
Yes
Yes - but you incorrectly assume ARIMA is the 'standard'. There are no standard models. I'd highly recommend reading a time series book (of which there are a number of excellent free books online). They typically will cover using ARIMA models with external regressors, dynamic regression, ETS models, etc.
NA
Maybe; depends on what your data looks like.
Depending on what you're using the data for and how important forecast accuracy is, there are a number of approaches you'll want to test using time series cross validation and/or test set holdouts. But essentially, you should look at ARIMA models that include external regressor variables for Easter. Holidays do not always fall on the same index day/week due to leap years.
Ideas for approaches to take:
Use daily data and include seasonal regressors for holidays and specify multiple seasonal periods (daily, yearly). Since we know information the model doesn't (holidays) it would be a pretty bad idea not to at least test using them.
You could aggregate the data at a weekly or monthly level, forecast those and then use a distribution pattern by month based on moving average for that month's volume from previous years. For example, day 1 December historically has averaged 3% of the total volume in that month, day 2 gets 2.3%, etc. The value of this method is monthly forecasting is typically more accurate than daily due to the noise to signal level at the daily resolution.
I am really impressed with the recent advances via Temporal Hierarchical Forecasting. There is an implementation of this methodology in the R thief package. This methodology can work really well on high frequency data (daily, weekly data). Still, you'll want to include holidays as external regressors even to this model framework since hotel usage is likely highly impacted by holidays.
Seasonal naive using a linear adjustment up/down based on your year-over-year trend (usually decent to stick with a naive approach to the trend). You'll still need to account for leap years and holidays, as they may not align using this method.
Reading a good practical forecasting book will likely be the best place to start.
EDIT:
Free online practical forecasting book link:
https://www.otexts.org/fpp | Forecasting/estimating daily hotel room demand
Yes
Yes - but you incorrectly assume ARIMA is the 'standard". There are no standard models. I'd highly recommend reading a time series book (of which there are a number of excellent free books online) |
55,885 | Forecasting/estimating daily hotel room demand | Forecasting daily data is the objective which seems on the surface to be an everyday (pun) standard problem. Standard this ain't ! Even free online texts might not be very helpful as "model identification is the problem/opportunity" . Time series models (ARIMA) incorporating predictor series (X) is the suggested answer where the form of X is to be patiently disciovered. It is known as a Transfer Function and commonly referred to as a Dynamic Regression with ARIMA (XARMAX). ARIMA alone is definitely not the standard as one also needs to incorporate both known and unknown deterministic effects (the X's) as @anscombesgimlet wisely suggested. Smoothing at a higher level of frequency like weeks or months or quarters or years as a "fudge factor" is often (always !) inadequate because of the assumed proportional factors which are in my opinion are often a bad rule of thumb as they often (always) vary over time. Developing daily models that incorporate memory (ARIMA) , daily effects , particular day-of-the-month effects, lead and lag effects around holidays, weekly and monthly effects and even week within month effects , long-weekend effects while dealing with changes in day-of-the-week effects, level/step shifts , local time trends and changes in trends , user-suggested causal variables like weather/price/promotion is not for the weak of heart or those without resources or a lot of coding time on their hands.
Additionally there should be some concern for parameter changes and error variance changes over time as these two are often violated by the "bad data" that really isn't bad but "real-life" and untreated/ignored can throw monkey-wrenches into deficient (standard) analyses.
I became involved in the business of understanding and developing data-based solutions/software for daily data when a "small beer company in St.Louis that had horses" asked to predict (very tactical) daily sales for 50 products for 600,000 retail outlets using any and all known factors such as projected prices and temperature while incorporating possible cannibalization factors. Nothing like a good real-world example to get the juices flowing !. In fact I find that real-world data often drives theoretical development as an impetus that won't go away.
Besides reading what you can find in resources like SE, I suggest that you acquaint yourself with possible solution providers/local statisticians trained in the black-art of time series and deliver a typical data set to them for their fun and pleasure and your education. Search SE for the string "DAILY DATA" and pursue some threads.
You could start by posting one of your time series here and offering a reward for a successful responder. The data doesn't have to be real , it could be coded data . It could be fabricated/simulated to reflect information that lies hidden in the data waiting to be discovered or most likely ignored as the case may be.
As @whuber once opined and I paraphrase from memory "there are a lot of wrong ways to solve a difficult problem and usually only one correct way"
This problem in some ways is more complex than loading beer onto supermarket shelves because hotel occupancy prediction should/must incorporate the "current reservation count" that is known for all future dates and varies over time. This is an interesting twist that is both a complication and an opportunity. It would be interesting to me to find out exactly how this problem is currently being handled by existing methods in order to craft a workable solution.
You shouldn't fret about when holidays occur as most forecasting packages routinely handle that accounting. What you should fret about is "how to detect appropriate lead and lag effects around the holidays" among other things heretofore mentioned.
EDITED 1/20
As an example of what @darXider is suggesting (incorporating fixed effects ) look at http://www.autobox.com/cms/index.php/afs-university/intro-to-forecasting/doc_download/53-capabilities-presentation .. slide 49-68 . Use this as a prototype and even if you decode to roll your own solution examine the approach. Plotting data as you suggested can be pretty time-intensive and very inefficient and would never be sufficient/cost-effective to form useful models for each of your hotels. I would be looking for productivity aids to use honed model identification schemes wherever I could find them. As I suggested you might want to get help from an experienced daily-time series statistician and have them provide guidance to you. AUTOBOX which I helped develop has a data-based solution for this with both SAS and SPSS as two other possibilities. | Forecasting/estimating daily hotel room demand | Forecasting daily data is the objective which seems on the surface to be an everyday (pun) standard problem. Standard this ain't ! Even free online texts might not be very helpful as "model identifica | Forecasting/estimating daily hotel room demand
Forecasting daily data is the objective which seems on the surface to be an everyday (pun) standard problem. Standard this ain't ! Even free online texts might not be very helpful as "model identification is the problem/opportunity" . Time series models (ARIMA) incorporating predictor series (X) is the suggested answer where the form of X is to be patiently disciovered. It is known as a Transfer Function and commonly referred to as a Dynamic Regression with ARIMA (XARMAX). ARIMA alone is definitely not the standard as one also needs to incorporate both known and unknown deterministic effects (the X's) as @anscombesgimlet wisely suggested. Smoothing at a higher level of frequency like weeks or months or quarters or years as a "fudge factor" is often (always !) inadequate because of the assumed proportional factors which are in my opinion are often a bad rule of thumb as they often (always) vary over time. Developing daily models that incorporate memory (ARIMA) , daily effects , particular day-of-the-month effects, lead and lag effects around holidays, weekly and monthly effects and even week within month effects , long-weekend effects while dealing with changes in day-of-the-week effects, level/step shifts , local time trends and changes in trends , user-suggested causal variables like weather/price/promotion is not for the weak of heart or those without resources or a lot of coding time on their hands.
Additionally there should be some concern for parameter changes and error variance changes over time as these two are often violated by the "bad data" that really isn't bad but "real-life" and untreated/ignored can throw monkey-wrenches into deficient (standard) analyses.
I became involved in the business of understanding and developing data-based solutions/software for daily data when a "small beer company in St.Louis that had horses" asked to predict (very tactical) daily sales for 50 products for 600,000 retail outlets using any and all known factors such as projected prices and temperature while incorporating possible cannibalization factors. Nothing like a good real-world example to get the juices flowing !. In fact I find that real-world data often drives theoretical development as an impetus that won't go away.
Besides reading what you can find in resources like SE, I suggest that you acquaint yourself with possible solution providers/local statisticians trained in the black-art of time series and deliver a typical data set to them for their fun and pleasure and your education. Search SE for the string "DAILY DATA" and pursue some threads.
You could start by posting one of your time series here and offering a reward for a successful responder. The data doesn't have to be real , it could be coded data . It could be fabricated/simulated to reflect information that lies hidden in the data waiting to be discovered or most likely ignored as the case may be.
As @whuber once opined and I paraphrase from memory "there are a lot of wrong ways to solve a difficult problem and usually only one correct way"
This problem in some ways is more complex than loading beer onto supermarket shelves because hotel occupancy prediction should/must incorporate the "current reservation count" that is known for all future dates and varies over time. This is an interesting twist that is both a complication and an opportunity. It would be interesting to me to find out exactly how this problem is currently being handled by existing methods in order to craft a workable solution.
You shouldn't fret about when holidays occur as most forecasting packages routinely handle that accounting. What you should fret about is "how to detect appropriate lead and lag effects around the holidays" among other things heretofore mentioned.
EDITED 1/20
As an example of what @darXider is suggesting (incorporating fixed effects ) look at http://www.autobox.com/cms/index.php/afs-university/intro-to-forecasting/doc_download/53-capabilities-presentation .. slide 49-68 . Use this as a prototype and even if you decode to roll your own solution examine the approach. Plotting data as you suggested can be pretty time-intensive and very inefficient and would never be sufficient/cost-effective to form useful models for each of your hotels. I would be looking for productivity aids to use honed model identification schemes wherever I could find them. As I suggested you might want to get help from an experienced daily-time series statistician and have them provide guidance to you. AUTOBOX which I helped develop has a data-based solution for this with both SAS and SPSS as two other possibilities. | Forecasting/estimating daily hotel room demand
Forecasting daily data is the objective which seems on the surface to be an everyday (pun) standard problem. Standard this ain't ! Even free online texts might not be very helpful as "model identifica |
55,886 | Granger test: do I need stationarity? | Some types of nonstationarity are allowed, as long as we can build a model and a testing procedure that account for the specific type of nonstationarity. See Dave Giles' famous blog post "Testing for Granger causality" for the case of unit-root nonstationarity.
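As one simple illustration (not taken from that post), unit-root nonstationarity is often handled by testing on a stationary transformation such as first differences:
library(lmtest)
set.seed(1)
x <- cumsum(rnorm(200)); y <- cumsum(rnorm(200))   # two made-up I(1) series
grangertest(diff(y) ~ diff(x), order = 2)          # Granger test on the differenced series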
But obviously, not all types of nonstationarity can be allowed for. If the time series is too erratic for any model we could manage to build, we will not have a way to carry out the test. | Granger test: do I need stationarity? | Some types of nonstationarity are allowed, as long as we can build a model and a testing procedure that account for the specific type of nonstationarity. See Dave Giles' famous blog post "Testing for | Granger test: do I need stationarity?
Some types of nonstationarity are allowed, as long as we can build a model and a testing procedure that account for the specific type of nonstationarity. See Dave Giles' famous blog post "Testing for Granger causality" for the case of unit-root nonstationarity.
But obviously, not all types of nonstationarity can be allowed for. If the time series is too erratic for any model we could manage to build, we will not have a way to carry out the test. | Granger test: do I need stationarity?
Some types of nonstationarity are allowed, as long as we can build a model and a testing procedure that account for the specific type of nonstationarity. See Dave Giles' famous blog post "Testing for |
55,887 | What's the difference between $\ell_1$-SVM, $\ell_2$-SVM and LS-SVM loss functions? | Consider the input $\{x_i,y_i\}^N:x_i\in \mathbb{R}^p,y_i\in\{-1,1\}$.
The common $\ell_2$-regularized $\ell_1$-SVM minimizes the following loss:
$$
\left\{\begin{matrix}
\min_{w,b,\xi}\ {1\over2}\|w\|_2^2+{C\over2}\sum_{i=1}^n\xi_i
\\
\xi_i=\max(0,\,1-y_i(w\cdot\phi(x_i)+b))\quad\forall i
\end{matrix}\right.
$$
Only points at the wrong side of the separating hyperplane are (linearly) penalized by the hinge loss. This provides a maximum margin classifier, or, in other words, the classifier tries to keep points at the right side of the classifying margin, regardless of the distance. A byproduct of the hinge formulation is the possibility of sparsity: the number of non-zero supports of the margin often allows this to be less than $N$. Notice though that the loss is non-differentiable. Primal solutions must make use of the subgradient, while dual solutions result in a quadratic problem.
For the $\ell_2$-regularized $\ell_2$-SVM the loss used is:
$$
\left\{\begin{matrix}
\min_{w,b,\xi}{1\over2}\|w\|_2^2+{C\over2}\sum_{i=1}^n\xi_i^2
\\
\xi_i=\max{(0,1-y_i(w\cdot\phi(x_i)+b))}\space\forall i
\end{matrix}\right.
$$
Again, only points that violate the margin are penalized, but now the penalty for violations is more severe: quadratic. Also, this loss (the squared hinge loss) is differentiable, which means solutions in the primal formulation can use standard gradient-descent techniques. Because points beyond the margin still incur zero loss, sparsity is not lost here: only margin violators (and points on the margin) end up as support vectors.
For the traditional $\ell_2$-regularized LS-SVM, the following loss is minimized [1]:
$$
\left\{\begin{matrix}
\min_{w,b,e}{1\over2}\|w\|_2^2+{C\over2}\sum_{i=1}^ne_i^2
\\
e_i=y_i-(w\cdot\phi(x_i)+b)\space\forall i
\end{matrix}\right.
$$
This leads to a system of linear equations instead of the quadratic program that arises for $\ell_1$-SVMs and $\ell_2$-SVMs. But notice the problem penalizes points on either side of the margin; only points whose prediction matches their true value exactly are not penalized, so it is not a maximum-margin classifier. The same loss is used for regression and, indeed, the LS-SVM formulation is identical to a Ridge Regression (over Fisher $\{-1,1\}$ labels for classification).
Regression can be compared analogously, except that the standard SVM for regression usually uses an epsilon-insensitive loss; LS-SVM regression is Ridge Regression.
[1] Chu, Wei, Chong Jin Ong, and S. Sathiya Keerthi. "A Note on Least Squares Support Vector Machines." | What's the difference between $\ell_1$-SVM, $\ell_2$-SVM and LS-SVM loss functions? | Consider the input $\{x_i,y_i\}^N:x_i\in \mathbb{R}^p,y_i\in\{-1,1\}$.
The common $\ell_2$-regularized $\ell_1$-SVM minimizes the following loss:
$$
\left\{\begin{matrix}
\min_{w_i,e_i}{1\over2}\|w\| | What's the difference between $\ell_1$-SVM, $\ell_2$-SVM and LS-SVM loss functions?
Consider the input $\{x_i,y_i\}^N:x_i\in \mathbb{R}^p,y_i\in\{-1,1\}$.
The common $\ell_2$-regularized $\ell_1$-SVM minimizes the following loss:
$$
\left\{\begin{matrix}
\min_{w,b,\xi}{1\over2}\|w\|_2^2+{C\over2}\sum_{i=1}^n\xi_i
\\
\xi_i=\max{(0,1-y_i(w\cdot\phi(x_i)+b))}\space\forall i
\end{matrix}\right.
$$
Only points that violate the margin (i.e., lie inside it or on the wrong side of the separating hyperplane) are linearly penalized by the hinge loss. This provides a maximum-margin classifier: the classifier tries to keep points on the right side of the classifying margin, regardless of how far beyond it they lie. A byproduct of the hinge formulation is the possibility of sparsity: the number of support vectors (points with a non-zero contribution to the solution) is often smaller than $N$. Notice though that the loss is non-differentiable. Primal solutions must make use of the subgradient, while dual solutions result in a quadratic problem.
For the $\ell_2$-regularized $\ell_2$-SVM the loss used is:
$$
\left\{\begin{matrix}
\min_{w,b,\xi}{1\over2}\|w\|_2^2+{C\over2}\sum_{i=1}^n\xi_i^2
\\
\xi_i=\max{(0,1-y_i(w\cdot\phi(x_i)+b))}\space\forall i
\end{matrix}\right.
$$
Again, only points that violate the margin are penalized, but now the penalty for violations is more severe: quadratic. Also, this loss (the squared hinge loss) is differentiable, which means solutions in the primal formulation can use standard gradient-descent techniques. Because points beyond the margin still incur zero loss, sparsity is not lost here: only margin violators (and points on the margin) end up as support vectors.
For the traditional $\ell_2$-regularized LS-SVM, the following loss is minimized [1]:
$$
\left\{\begin{matrix}
\min_{w,b,e}{1\over2}\|w\|_2^2+{C\over2}\sum_{i=1}^ne_i^2
\\
e_i=y_i-(w\cdot\phi(x_i)+b)\space\forall i
\end{matrix}\right.
$$
This leads to a system of linear equations instead of the quadratic program that arises for $\ell_1$-SVMs and $\ell_2$-SVMs. But notice the problem penalizes points on either side of the margin; only points whose prediction matches their true value exactly are not penalized, so it is not a maximum-margin classifier. The same loss is used for regression and, indeed, the LS-SVM formulation is identical to a Ridge Regression (over Fisher $\{-1,1\}$ labels for classification).
Regression can be compared analogously, except that the standard SVM for regression usually uses an epsilon-insensitive loss; LS-SVM regression is Ridge Regression.
[1] Chu, Wei, Chong Jin Ong, and S. Sathiya Keerthi. "A Note on Least Squares Support Vector Machines." | What's the difference between $\ell_1$-SVM, $\ell_2$-SVM and LS-SVM loss functions?
Consider the input $\{x_i,y_i\}^N:x_i\in \mathbb{R}^p,y_i\in\{-1,1\}$.
The common $\ell_2$-regularized $\ell_1$-SVM minimizes the following loss:
$$
\left\{\begin{matrix}
\min_{w_i,e_i}{1\over2}\|w\| |
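To visualise the contrast described above, here is a small R sketch (my own illustration, not part of the original answer) plotting the three per-observation losses as functions of the margin $m = y\,f(x)$; for $y \in \{-1,1\}$ the LS-SVM squared error $(y - f(x))^2$ equals $(1 - m)^2$.
# Per-observation losses vs. margin m = y * f(x) (illustration only)
m  <- seq(-2, 3, by = 0.01)
hinge    <- pmax(0, 1 - m)        # l1-SVM: zero once m >= 1
sq_hinge <- pmax(0, 1 - m)^2      # l2-SVM: also zero once m >= 1, but smooth
ls_svm   <- (1 - m)^2             # LS-SVM: penalizes m > 1 as well (no margin)
matplot(m, cbind(hinge, sq_hinge, ls_svm), type = "l", lty = 1,
        xlab = "margin y * f(x)", ylab = "loss")
legend("topright", c("hinge", "squared hinge", "LS-SVM squared error"),
       col = 1:3, lty = 1)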
55,888 | How and When to Use Marginalization in Stan | Stan only samples from continuous parameter spaces, so for something like a finite mixture model, it is necessary to do marginalization to use Stan. On the other hand, if you have a hierarchical model where a small number of parameters control the distribution of a large number of parameters, marginalization is probably not necessary.
Since marginalization, whether over discrete or continuous latent variables, reduces the dimension of the parameter space, one could expect that you would get a speed up in sampling. However, if you don't draw samples from the full posterior distribution, then that limits what you can do inference on. Also, the integration is usually not trivial, so there is an associated cost. Whether it would make sense for you depends on your problem and objective. | How and When to Use Marginalization in Stan | Stan only samples from continuous parameter spaces, so for something like a finite mixture model, it is necessary to do marginalization to use Stan. On the other hand, if you have a hierarchical model | How and When to Use Marginalization in Stan
Stan only samples from continuous parameter spaces, so for something like a finite mixture model, it is necessary to do marginalization to use Stan. On the other hand, if you have a hierarchical model where a small number of parameters control the distribution of a large number of parameters, marginalization is probably not necessary.
Since marginalization, whether over discrete or continuous latent variables, reduces the dimension of the parameter space, one could expect that you would get a speed up in sampling. However, if you don't draw samples from the full posterior distribution, then that limits what you can do inference on. Also, the integration is usually not trivial, so there is an associated cost. Whether it would make sense for you depends on your problem and objective. | How and When to Use Marginalization in Stan
Stan only samples from continuous parameter spaces, so for something like a finite mixture model, it is necessary to do marginalization to use Stan. On the other hand, if you have a hierarchical model |
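As a hedged illustration of what marginalizing a discrete latent variable means in practice, here is the log-likelihood of a two-component Gaussian mixture with the component indicator summed out via log-sum-exp, written in R for brevity; in Stan the same quantity is what one adds to the target (e.g. with log_mix()). The data and parameter values below are purely illustrative.
# Marginalized two-component Gaussian mixture log-likelihood (illustration)
marg_loglik <- function(y, theta, mu1, mu2, sigma) {
  # for each y_i: log( theta*N(y|mu1,sigma) + (1-theta)*N(y|mu2,sigma) )
  l1 <- log(theta)     + dnorm(y, mu1, sigma, log = TRUE)
  l2 <- log(1 - theta) + dnorm(y, mu2, sigma, log = TRUE)
  m  <- pmax(l1, l2)                       # log-sum-exp for numerical stability
  sum(m + log(exp(l1 - m) + exp(l2 - m)))
}
set.seed(2)
y <- c(rnorm(50, -2), rnorm(50, 2))
marg_loglik(y, theta = 0.5, mu1 = -2, mu2 = 2, sigma = 1)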
55,889 | Modeling reaction time with glmer | The high AIC value is not a problem in itself: the AIC is a measure of the relative quality of a model; it says something about how good is the fit of a model but only with respect to another model fit on the same dataset. You cannot compare it to other RT models if these are not fit on the same dataset because - for example - the AIC will depend also on the number of observations.
The residual plot looks fine to me: the fitted values are discretized in specific values because of the categorical nature of your predictor variables (time and treatment). More specifically, because your predictors are 2 categorical variables with 2 levels each, the model can only predict a different value for each combination of the two variables, hence it will output 4 different values for each subject. It would be different if you had also a continuous variable as predictors, e.g. the number of hours slept.
The only issue with the residual plot may be that the distribution of residuals seems slightly skewed toward positive values. This is due to the fact that reaction time distributions are almost always skewed, and could be fixed in principle with a parametric transformation of the dependent variable, such as the Box-Cox power transform. However, since you prefer to avoid transformations, you may want to try also to fit the data with a gamma family, to see if the distribution of residuals improves. You can choose the preferred model (gamma vs. inverse Gaussian) based on the AIC, as the two models are fit on the same dataset.
Contrary to the previous answer, I think it can make sense to include in the model the interaction term treatment:timep (or at least to test it, then eventually remove it afterward if it does not improve the fit). Without the interaction, the model can only predict a purely additive combination of the effect of sleep and treatment: in other words, the model assumes that the effect of treatment is the same regardless of whether the improvement in reaction times due to sleep was large or small. This assumption, however, may not be true: for example, the effect of treatment may be larger for those subjects that show smaller benefit due to sleep alone (this would make sense if you consider that there may be a ceiling effect in the reaction time benefit, i.e. if reaction times cannot improve more than a certain amount). The interaction term treatment:timep would allow you to model such effects.
One last comment: your model only includes random intercepts for each subject. This means that while the marginal mean of the RTs is allowed to vary across participants, the effects of sleep or treatment are not. If you suspect that these effects may vary largely across subjects, it may be worth trying to fit random slopes for these effects. For example, if you want to model individual differences in the effect of sleep as a random effect, the random effect part of the formula would be (1+timep|subjectNumber). Of course, this will increase the complexity of the model, and the number of parameters estimated, and may result in issues during the estimation of the parameters (e.g. convergence warnings). You could start from the 'maximal' model, with all possible random effects included (the formula would be (treatment * timep | subjectNumber)) and then proceed by simplifying it if the estimation does not converge. For some advice on how to choose the right level of complexity given the data, I suggest you check this article by Bates & colleagues. | Modeling reaction time with glmer | The high AIC value is not a problem in itself: the AIC is a measure of the relative quality of a model; it says something about how good is the fit of a model but only with respect to another model fi | Modeling reaction time with glmer
The high AIC value is not a problem in itself: the AIC is a measure of the relative quality of a model; it says something about how good is the fit of a model but only with respect to another model fit on the same dataset. You cannot compare it to other RT models if these are not fit on the same dataset because - for example - the AIC will depend also on the number of observations.
The residual plot looks fine to me: the fitted values are discretized in specific values because of the categorical nature of your predictor variables (time and treatment). More specifically, because your predictors are 2 categorical variables with 2 levels each, the model can only predict a different value for each combination of the two variables, hence it will output 4 different values for each subject. It would be different if you had also a continuous variable as predictors, e.g. the number of hours slept.
The only issue with the residual plot may be that the distribution of residuals seems slightly skewed toward positive values. This is due to the fact that reaction time distributions are almost always skewed, and could be fixed in principle with a parametric transformation of the dependent variable, such as the Box-Cox power transform. However, since you prefer to avoid transformations, you may want to try also to fit the data with a gamma family, to see if the distribution of residuals improves. You can choose the preferred model (gamma vs. inverse Gaussian) based on the AIC, as the two models are fit on the same dataset.
Contrary to the previous answer, I think it can make sense to include in the model the interaction term treatment:timep (or at least to test it, then eventually remove it afterward if it does not improve the fit). Without the interaction, the model can only predict a purely additive combination of the effect of sleep and treatment: in other words, the model assumes that the effect of treatment is the same regardless of whether the improvement in reaction times due to sleep was large or small. This assumption, however, may not be true: for example, the effect of treatment may be larger for those subjects that show smaller benefit due to sleep alone (this would make sense if you consider that there may be a ceiling effect in the reaction time benefit, i.e. if reaction times cannot improve more than a certain amount). The interaction term treatment:timep would allow you to model such effects.
One last comment: your model only includes random intercepts for each subject. This means that while the marginal mean of the RTs is allowed to vary across participants, the effects of sleep or treatment are not. If you suspect that these effects may vary largely across subjects, it may be worth trying to fit random slopes for these effects. For example, if you want to model individual differences in the effect of sleep as a random effect, the random effect part of the formula would be (1+timep|subjectNumber). Of course, this will increase the complexity of the model, and the number of parameters estimated, and may result in issues during the estimation of the parameters (e.g. convergence warnings). You could start from the 'maximal' model, with all possible random effects included (the formula would be (treatment * timep | subjectNumber)) and then proceed by simplifying it if the estimation does not converge. For some advice on how to choose the right level of complexity given the data, I suggest you check this article by Bates & colleagues. | Modeling reaction time with glmer
The high AIC value is not a problem in itself: the AIC is a measure of the relative quality of a model; it says something about how good is the fit of a model but only with respect to another model fi |
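A hedged sketch of the AIC comparison and the random-slope extension suggested above. The variable names (reaction, timep, treatment, subjectNumber) are taken from the question's model call; dat is a placeholder for the question's data frame (which was piped in via data = . and is not shown here), so this is a template rather than runnable output.
# Sketch: 'dat' stands in for the question's data frame
library(lme4)
f <- reaction ~ timep * treatment + (1 | subjectNumber)
fit_ig    <- glmer(f, data = dat, family = inverse.gaussian(link = "identity"))
fit_gamma <- glmer(f, data = dat, family = Gamma(link = "identity"))
AIC(fit_ig, fit_gamma)   # same data, so the AICs are directly comparable
# random slope for the sleep (timep) effect, as discussed above
fit_slope <- glmer(reaction ~ timep * treatment + (1 + timep | subjectNumber),
                   data = dat, family = Gamma(link = "identity"))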
55,890 | Modeling reaction time with glmer | It is not correct to say what you propose. The point estimate of the effect of treatment is 4.614. So the model is not saying "no effect". The standard error of that estimate is 2.709, so the p-value is 0.08 (which many people would say is suggestive of "significance"). You only had 22 subjects (presumably 11 per group), so making many observations on a small number of subjects does not protect you from the "power" difficulties imposed by that sample size.
The results would allow you to say that time after sleep has an effect at conventional levels of significance and that measured effect (7.745 difference) is probably larger than the size of the treatment effect.
You included an interaction term for which I did not see the need. I would try to compare two models:
glmer(reaction ~ timep + treatment + (1|subjectNumber), data=., family = inverse.gaussian(link = "identity"))
glmer(reaction ~ timep + (1|subjectNumber), data=., family = inverse.gaussian(link = "identity")) | Modeling reaction time with glmer | It is not correct to say what you propose. The point estimate of the effect of treatments is 4.614. So the model is not saying "no effect". The standard error of that estimate is 2.709, so the p-value | Modeling reaction time with glmer
It is not correct to say what you propose. The point estimate of the effect of treatment is 4.614. So the model is not saying "no effect". The standard error of that estimate is 2.709, so the p-value is 0.08 (which many people would say is suggestive of "significance"). You only had 22 subjects (presumably 11 per group), so making many observations on a small number of subjects does not protect you from the "power" difficulties imposed by that sample size.
The results would allow you to say that time after sleep has an effect at conventional levels of significance and that measured effect (7.745 difference) is probably larger than the size of the treatment effect.
You included an interaction term for which I did not see the need. I would try to compare two models:
glmer(reaction ~ timep + treatment + (1|subjectNumber), data=., family = inverse.gaussian(link = "identity"))
glmer(reaction ~ timep + (1|subjectNumber), data=., family = inverse.gaussian(link = "identity")) | Modeling reaction time with glmer
It is not correct to say what you propose. The point estimate of the effect of treatments is 4.614. So the model is not saying "no effect". The standard error of that estimate is 2.709, so the p-value |
55,891 | Asymptotic normality: do the following convergences hold? | This can be solved from basic principles. At the end I'll explain the underlying idea.
Let $X_n$ be a sequence of iid standard Normal variables and, independently of it, let $Y_n$ also be such a sequence. Independently of both of them let $U_n$ be a sequence of independent Bernoulli variables with parameter $1/n$: that is, $U_n$ has a probability $1/n$ of equalling $1$ and otherwise is $0$. Pick a number $p$ (to be determined below) and define
$$Z_n = U_n(Y_n + n^p) + (1-U_n) X_n.$$
Each $Z_n$ is a mixture of a standard Normal (namely $X_n$) and a standard Normal shifted to $n^p$ (namely $Y_n+n^p$). Compute the mean and variance of $Z_n$:
$$\mu_n = n^{p-1};\quad \sigma^2_n = 1 + (n-1)n^{2(p-1)}.$$
The distribution of $Z_n$ approaches a standard Normal distribution $\Phi$ because its distribution function is
$$F_{Z_n}(z) = \frac{n-1}{n}\Phi(z) + \frac{1}{n}\Phi(z-n^p) \to \Phi(z).$$
Consequently $a_n=0$ and $b_n=1$ will work, since $(Z_n-a_n)/b_n=Z_n$. Nevertheless,
$$\frac{\sigma_n^2}{b_n^2} = \frac{1 + (n-1)n^{2(p-1)}}{1} \approx n^{2p-1}$$
diverges for $p \gt 1/2$ and
$$\frac{\mu_n-a_n}{b_n} = \frac{n^{p-1}}{1}$$
diverges for $p \gt 1$.
What has happened is that moving a vanishing bit of the total probability ($1/n$ of it) doesn't change the limiting distribution, but spreading the two components far enough to counterbalance that small probability (by selecting a sufficiently large $p$) allows us to control the mean and variance and even make them both diverge. | Asymptotic normality: do the following convergences hold? | This can be solved from basic principles. At the end I'll explain the underlying idea.
Let $X_n$ be a sequence of iid standard Normal variables and, independently of it, let $Y_n$ also be such a sequ | Asymptotic normality: do the following convergences hold?
This can be solved from basic principles. At the end I'll explain the underlying idea.
Let $X_n$ be a sequence of iid standard Normal variables and, independently of it, let $Y_n$ also be such a sequence. Independently of both of them let $U_n$ be a sequence of independent Bernoulli variables with parameter $1/n$: that is, $U_n$ has a probability $1/n$ of equalling $1$ and otherwise is $0$. Pick a number $p$ (to be determined below) and define
$$Z_n = U_n(Y_n + n^p) + (1-U_n) X_n.$$
Each $Z_n$ is a mixture of a standard Normal (namely $X_n$) and a standard Normal shifted to $n^p$ (namely $Y_n+n^p$). Compute the mean and variance of $Z_n$:
$$\mu_n = n^{p-1};\quad \sigma^2_n = 1 + (n-1)n^{2(p-1)}.$$
The distribution of $Z_n$ approaches a standard Normal distribution $\Phi$ because its distribution function is
$$F_{Z_n}(z) = \frac{n-1}{n}\Phi(z) + \frac{1}{n}\Phi(z-n^p) \to \Phi(z).$$
Consequently $a_n=0$ and $b_n=1$ will work, since $(Z_n-a_n)/b_n=Z_n$. Nevertheless,
$$\frac{\sigma_n^2}{b_n^2} = \frac{1 + (n-1)n^{2(p-1)}}{1} \approx n^{2p-1}$$
diverges for $p \gt 1/2$ and
$$\frac{\mu_n-a_n}{b_n} = \frac{n^{p-1}}{1}$$
diverges for $p \gt 1$.
What has happened is that moving a vanishing bit of the total probability ($1/n$ of it) doesn't change the limiting distribution, but spreading the two components far enough to counterbalance that small probability (by selecting a sufficiently large $p$) allows us to control the mean and variance and even make them both diverge. | Asymptotic normality: do the following convergences hold?
This can be solved from basic principles. At the end I'll explain the underlying idea.
Let $X_n$ be a sequence of iid standard Normal variables and, independently of it, let $Y_n$ also be such a sequ |
55,892 | Why do we need to take the transpose of the data for PCA? | We do not need to.
It is a common and long-standing convention in statistics that data matrices have observations in rows and variables in columns. In your case, you indeed have $1000$ observations of $9$ variables. So it would be standard to organize your data in a matrix of $1000\times 9$ size. Most standard PCA implementations will expect to get such an input.
For example, pca() function in Matlab says this on its help page:
coeff = pca(X) returns the principal component coefficients, also known as loadings, for the $n$-by-$p$ data matrix X. Rows of X correspond to observations and columns correspond to variables. The coefficient matrix is $p$-by-$p$.
But if you write your own code for PCA, you are free to follow an opposite convention and store variables in rows. I often did it myself this way. | Why do we need to take the transpose of the data for PCA? | We do not need to.
It is a common and long-standing convention in statistics that data matrices have observations in rows and variables in columns. In your case, you indeed have $1000$ observations of | Why do we need to take the transpose of the data for PCA?
We do not need to.
It is a common and long-standing convention in statistics that data matrices have observations in rows and variables in columns. In your case, you indeed have $1000$ observations of $9$ variables. So it would be standard to organize your data in a matrix of $1000\times 9$ size. Most standard PCA implementations will expect to get such an input.
For example, pca() function in Matlab says this on its help page:
coeff = pca(X) returns the principal component coefficients, also known as loadings, for the $n$-by-$p$ data matrix X. Rows of X correspond to observations and columns correspond to variables. The coefficient matrix is $p$-by-$p$.
But if you write your own code for PCA, you are free to follow an opposite convention and store variables in rows. I often did it myself this way. | Why do we need to take the transpose of the data for PCA?
We do not need to.
It is a common and long-standing convention in statistics that data matrices have observations in rows and variables in columns. In your case, you indeed have $1000$ observations of |
55,893 | Durbin-Watson test and biological (non time-series) data | (1) There is some correlation in the ordering of the observations. In this case, (part of) the reason is that the observations are ordered by Cult (a factor indicating the cultivator of the cabbages). And because the first cultivator is mostly associated with negative residuals and the second cultivator mostly with positive residuals, this pattern will be picked up by diagnostic tests. It might look like a "trend" or like "autocorrelation" if this is all the tests look for.
(2) Linear regression itself seems to work ok. But it is important to control for Cult and not only for HeadWt. Possibly Date could be relevant as well. It would also be good to check what the MASS book says about the data (my copy is in the office, hence I can't check right now).
(3) No. The Durbin-Watson is appropriate if you have correlations over "time" or some other kind of natural ordering of the observations. And even then there might be other autocorrelation tests that could be more suitable. | Durbin-Watson test and biological (non time-series) data | (1) There is some correlation in the ordering of the observations. In this case, (part of) the reason is that the observations are ordered by Cult (a factor indicating the cultivator of the cabbages). | Durbin-Watson test and biological (non time-series) data
(1) There is some correlation in the ordering of the observations. In this case, (part of) the reason is that the observations are ordered by Cult (a factor indicating the cultivator of the cabbages). And because the first cultivator is mostly associated with negative residuals and the second cultivator mostly with positive residuals, this pattern will be picked up by diagnostic tests. It might look like a "trend" or like "autocorrelation" if this is all the tests look for.
(2) Linear regression itself seems to work ok. But it is important to control for Cult and not only for HeadWt. Possibly Date could be relevant as well. It would also be good to check what the MASS book says about the data (my copy is in the office, hence I can't check right now).
(3) No. The Durbin-Watson is appropriate if you have correlations over "time" or some other kind of natural ordering of the observations. And even then there might be other autocorrelation tests that could be more suitable. | Durbin-Watson test and biological (non time-series) data
(1) There is some correlation in the ordering of the observations. In this case, (part of) the reason is that the observations are ordered by Cult (a factor indicating the cultivator of the cabbages). |
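A hedged R sketch of points (1) and (2), assuming the question used the MASS cabbages data with VitC as the response (the response variable is not stated above, so treat VitC as an assumption):
# Sketch, assuming the response in the original regression was VitC
library(MASS)     # cabbages data: Cult, Date, HeadWt, VitC
library(lmtest)   # dwtest()
m1 <- lm(VitC ~ HeadWt, data = cabbages)          # omits Cult
m2 <- lm(VitC ~ Cult + HeadWt, data = cabbages)   # controls for Cult
dwtest(m1)   # "autocorrelation" largely reflects the rows being ordered by Cult
dwtest(m2)   # should look much better once Cult is in the model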
55,894 | How to gain knowledge from dataset using regressions in R [closed] | The most important thing to do is for you to check if the model makes sense. You have fit a linear model to three continuous predictors, so you need to make sure that it makes sense to do so. You should look at scatterplots of age, height, and weight against y, and adjust the fits of these predictors if needed.
Assuming fitting these predictors linearly is reasonable, fitting the full model with all four predictors is a sensible thing to do.
You have only 25 data points. If you go on a long search through the space of all models (adding and removing variables) you have an extremely high risk of false positives. So, I don't think there is much need to backwards select out variables; if you wish to do so, make sure you use cross validation to make sure doing so improves the fit of the model to unseen data.
The same thing applies to a search for interactions, you have little data, and you are running a large risk of false positives.
If you wish to make inferences using the estimated confidence intervals, you should additionally check a plot of the residuals vs. the fitted values of the model and make sure you do not see any patterns. You're looking to see if they look like they could have been drawn from a normal distribution with constant variance. If this looks reasonably consistent with your data, then you can make inferences about the graduation parameter using the linear model
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 140.1689 8.3191 16.849 2.77e-13
x1 -1.1428 0.1904 -6.002 7.22e-06
x2 -0.4699 0.1866 -2.518 0.0204
x3yes 2.2259 4.1402 0.538 0.5968
x4 1.2673 1.4922 0.849 0.4058
The x3 variable measures graduation, and its parameter lies well within the error of its estimation. So, given that everything above checks out, the data you used to train the model is not inconsistent with the effect of graduation being indistinguishable from noise.
Thanks so are we really able to judge this just from fitting the full model?
As long as all the caveats are met, I do think the best way to go about this is to fit the full model, and make your inference from that. Like I said, any inference you draw from a model that does variable selection is likely to occur by chance.
Another way to think about this is, if you go through a variable selection algorithm, the standard errors reported in the model are no longer correct, they are actually much larger than what is reported. To estimate the true standard errors of the parameter estimates under a selection / fitting procedure, you would need to use either nested cross validation or a bootstrap + cross validation. This would drive your data very, very thin, and incur a lot of variance (you are making lots of decisions, each has a chance to be wrong). Your standard errors would be enormous. | How to gain knowledge from dataset using regressions in R [closed] | The most important thing to do is for you to check if the model makes sense. You have fit a linear model to three continuous predictors, you need to make sure that it makes sense to do so You should | How to gain knowledge from dataset using regressions in R [closed]
The most important thing to do is for you to check if the model makes sense. You have fit a linear model to three continuous predictors, so you need to make sure that it makes sense to do so. You should look at scatterplots of age, height, and weight against y, and adjust the fits of these predictors if needed.
Assuming fitting these predictors linearly is reasonable, fitting the full model with all four predictors is a sensible thing to do.
You have only 25 data points. If you go on a long search through the space of all models (adding and removing variables) you have an extremely high risk of false positives. So, I don't think there is much need to backwards select out variables; if you wish to do so, make sure you use cross validation to make sure doing so improves the fit of the model to unseen data.
The same thing applies to a search for interactions, you have little data, and you are running a large risk of false positives.
If you wish to make inferences using the estimated confidence intervals, you should additionally check a plot of the residuals vs. the fitted values of the model and make sure you do not see any patterns. You're looking to see if they look like they could have been drawn from a normal distribution with constant variance. If this looks reasonably consistent with your data, then you can make inferences about the graduation parameter using the linear model
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 140.1689 8.3191 16.849 2.77e-13
x1 -1.1428 0.1904 -6.002 7.22e-06
x2 -0.4699 0.1866 -2.518 0.0204
x3yes 2.2259 4.1402 0.538 0.5968
x4 1.2673 1.4922 0.849 0.4058
The x3 variable measures graduation, and its parameter lies well within the error of its estimation. So, given that everything above checks out, the data you used to train the model is not inconsistent with the effect of graduation being indistinguishable from noise.
Thanks so are we really able to judge this just from fitting the full model?
As long as all the caveats are met, I do think the best way to go about this is to fit the full model, and make your inference from that. Like I said, any inference you draw from a model that does variable selection is likely to occur by chance.
Another way to think about this is, if you go through a variable selection algorithm, the standard errors reported in the model are no longer correct, they are actually much larger than what is reported. To estimate the true standard errors of the parameter estimates under a selection / fitting procedure, you would need to use either nested cross validation or a bootstrap + cross validation. This would drive your data very, very thin, and incur a lot of variance (you are making lots of decisions, each has a chance to be wrong). Your standard errors would be enormous. | How to gain knowledge from dataset using regressions in R [closed]
The most important thing to do is for you to check if the model makes sense. You have fit a linear model to three continuous predictors, you need to make sure that it makes sense to do so You should |
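A brief sketch of the residual check and the interval-based inference described above; fit is assumed to be the full model, e.g. fit <- lm(y ~ x1 + x2 + x3 + x4, data = dat), with the variable names used in the question, so this is a template rather than output from the real data.
# Sketch: 'fit' is assumed to be the full linear model from the question
plot(fitted(fit), resid(fit),
     xlab = "fitted values", ylab = "residuals")   # look for curvature or funnel shapes
abline(h = 0, lty = 2)
qqnorm(resid(fit)); qqline(resid(fit))             # rough normality check
confint(fit)["x3yes", ]   # interval for the graduation effect; it straddles zero here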
55,895 | How to gain knowledge from dataset using regressions in R [closed] | Because there can be dependencies between the predictor variables, it is possible that say X1 looks significant when X2 is left out. But, because X1 and X2 are highly dependent, X1 may appear non-significant when X2 is included in the model. With four predictor variables, there are $2^4 -1$ possible non-empty models. As this is only 15, it is not difficult to look at all subsets. If the number of variables was much larger, a step-wise approach should be adequate. If possible, pick a model where all the coefficients are significant and if you have 2 highly correlated variables make sure that one is excluded. | How to gain knowledge from dataset using regressions in R [closed] | Because there can be dependencies between the predictor variables, it is possible that say X1 looks significant when X2 is left out. But, because X1 and X2 are highly dependent, X1 may appear non-sign | How to gain knowledge from dataset using regressions in R [closed]
Because there can be dependencies between the predictor variables, it is possible that say X1 looks significant when X2 is left out. But, because X1 and X2 are highly dependent, X1 may appear non-significant when X2 is included in the model. With four predictor variables, there are $2^4 -1$ possible non-empty models. As this is only 15, it is not difficult to look at all subsets. If the number of variables was much larger, a step-wise approach should be adequate. If possible, pick a model where all the coefficients are significant and if you have 2 highly correlated variables make sure that one is excluded. | How to gain knowledge from dataset using regressions in R [closed]
Because there can be dependencies between the predictor variables, it is possible that say X1 looks significant when X2 is left out. But, because X1 and X2 are highly dependent, X1 may appear non-sign |
55,896 | How to transform a unit root process to a stationary process? | If a process has a unit root (a stochastic trend) and you want to make it stationary, you need to difference it. In other words, if $x_t\sim I(1)$, then $\Delta x_t:=x_t-x_{t-1} \sim I(0)$.
Without differencing you will not get rid of the unit root. E.g. subtracting a deterministic trend will not help because a unit root produces a stochastic trend; so you might end up with a combination of a stochastic and a deterministic trend in the end (the latter being introduced by subtracting a deterministic trend).
If you did not difference and the process truly has a unit root, then no wonder that whatever you tried has failed to produce a stationary transformation. | How to transform a unit root process to a stationary process? | If a process has a unit root (a stochastic trend) and you want to make it stationary, you need to difference it. In other words, if $x_t\sim I(1)$, then $\Delta x_t:=x_t-x_{t-1} \sim I(0)$.
Without d | How to transform a unit root process to a stationary process?
If a process has a unit root (a stochastic trend) and you want to make it stationary, you need to difference it. In other words, if $x_t\sim I(1)$, then $\Delta x_t:=x_t-x_{t-1} \sim I(0)$.
Without differencing you will not get rid of the unit root. E.g. subtracting a deterministic trend will not help because a unit root produces a stochastic trend; so you might end up with a combination of a stochastic and a deterministic trend in the end (the latter being introduced by subtracting a deterministic trend).
If you did not difference and the process truly has a unit root, then no wonder that whatever you tried has failed to produce a stationary transformation. | How to transform a unit root process to a stationary process?
If a process has a unit root (a stochastic trend) and you want to make it stationary, you need to difference it. In other words, if $x_t\sim I(1)$, then $\Delta x_t:=x_t-x_{t-1} \sim I(0)$.
Without d |
55,897 | How to transform a unit root process to a stationary process? | Taking the difference of the time series is NOT the only way to detrend a time series and to remove the unit root. Cointegration should be able to serve the same purpose. | How to transform a unit root process to a stationary process? | Taking the difference of the time series is NOT the only way to detrend a time series and to remove the unit root. Cointegration should be able to serve the same purpose. | How to transform a unit root process to a stationary process?
Taking the difference of the time series is NOT the only way to detrend a time series and to remove the unit root. Cointegration should be able to serve the same purpose. | How to transform a unit root process to a stationary process?
Taking the difference of the time series is NOT the only way to detrend a time series and to remove the unit root. Cointegration should be able to serve the same purpose.
55,898 | When to use Bernoulli Naive Bayes? | Bernoulli Naive Bayes is for binary features only. Similarly, multinomial naive Bayes treats features as event probabilities. Your example is given for nonbinary real-valued features $(x,y)$, which do not exclusively lie in the interval $[0,1]$, so the models do not apply to your features.
A typical example (taken from the wiki page) for either Bernoulli or multinomial NB is document classification, where the features represent the presence of a term (in the Bernoulli case) or the probability of a term (in the multinomial case). | When to use Bernoulli Naive Bayes? | Bernoulli Naive Bayes is for binary features only. Similarly, multinomial naive Bayes treats features as event probabilities. Your example is given for nonbinary real-valued features $(x,y)$, which do | When to use Bernoulli Naive Bayes?
Bernoulli Naive Bayes is for binary features only. Similarly, multinomial naive Bayes treats features as event probabilities. Your example is given for nonbinary real-valued features $(x,y)$, which do not exclusively lie in the interval $[0,1]$, so the models do not apply to your features.
A typical example (taken from the wiki page) for either Bernoulli or multinomial NB is document classification, where the features represent the presence of a term (in the Bernoulli case) or the probability of a term (in the multinomial case). | When to use Bernoulli Naive Bayes?
Bernoulli Naive Bayes is for binary features only. Similarly, multinomial naive Bayes treats features as event probabilities. Your example is given for nonbinary real-valued features $(x,y)$, which do |
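To make the document-classification use concrete, here is a tiny from-scratch R sketch (illustrative only; the data are a made-up 4-document, 3-term presence/absence matrix) of the Bernoulli naive Bayes computation with Laplace smoothing:
# Toy Bernoulli naive Bayes on binary term-presence features (illustration only)
X <- matrix(c(1,0,1,   # each row: presence/absence of 3 terms in a document
              1,1,0,
              0,0,1,
              0,1,1), nrow = 4, byrow = TRUE)
y <- factor(c("spam", "spam", "ham", "ham"))
# class priors and smoothed Bernoulli parameters P(term present | class)
priors <- table(y) / length(y)
theta  <- sapply(levels(y), function(cl) (colSums(X[y == cl, , drop = FALSE]) + 1) /
                                         (sum(y == cl) + 2))   # Laplace smoothing
# score a new document: log prior + sum of Bernoulli log-likelihoods
x_new  <- c(1, 0, 0)
scores <- sapply(levels(y), function(cl)
  log(priors[cl]) + sum(x_new * log(theta[, cl]) + (1 - x_new) * log(1 - theta[, cl])))
names(scores)[which.max(scores)]   # predicted class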
55,899 | When to use Bernoulli Naive Bayes? | Bernoulli naive Bayes is suited to discrete, binary features (e.g. iris, fingerprint, blood, etc.), while Gaussian naive Bayes is for continuous, approximately normally distributed features such as age, height or size. | When to use Bernoulli Naive Bayes? | Bernoulli naive Bayes is suited to discrete, binary features (e.g. iris, fingerprint, blood, etc.), while Gaussian naive Bayes is for continuous, approximately normally distributed features such as | When to use Bernoulli Naive Bayes?
Bernoulli naive Bayes is suited to discrete, binary features (e.g. iris, fingerprint, blood, etc.), while Gaussian naive Bayes is for continuous, approximately normally distributed features such as age, height or size. | When to use Bernoulli Naive Bayes?
Bernoulli naive Bayes is suited to discrete, binary features (e.g. iris, fingerprint, blood, etc.), while Gaussian naive Bayes is for continuous, approximately normally distributed features such as
55,900 | How to model nested fixed-factor with GLMM | Stream of consciousness:
you might want to consider log-transforming the response (provided there are no zeros) rather than using the log link, i.e. lmer(log(WaterChlA) ~ ...) rather than glmer(WaterChla ~ ..., family=gaussian(link="log")); I say this because log-transforming can take care of heteroscedasticity in the response (specifically a standard deviation approximately proportional to the expected mean value), which the log-link approach doesn't do (also, lmer tends to be a little bit faster and more stable than glmer)
Nested fixed effects are indeed hard to specify in linear models (as opposed to ANOVA frameworks), where the underlying framework is explicitly trying to estimate parameters rather than just evaluate sums of squares/proportions of variance explained. If the parameters are not uniquely identifiable, then the model will end up dropping some terms. If your experimental design doesn't allow it, you simply can't estimate an interaction, which is necessary for nested terms. For example, you can't tell how the effect of Compo=AB varies among species richnesses, because Compo=AB only exists when species richness is 2. You have a further problem: you can't even estimate the effect of species richness uniquely when Compo is in the model, because Compo is redundant with species richness (once you know the Compo, you know the species richness). Such redundancy is allowed in variance-decomposition approaches, or in Bayesian models, but not in linear models based on parameter estimation and models descended from them. You could make Compo a random effect (also a good modeling strategy because it has many levels, which will be expensive in terms of degrees of freedom); that would still allow you to quantify the effects.
Since you have 480 data points, you should keep in mind the rule of thumb (e.g. from Harrell's Regression Modeling Strategies) that you should not try to fit a model with more than at most n/10 = 48 parameters (that's extreme - typically the rule is stated as n/20, which would mean 24 parameters). Because your predictors are all factors, trying to estimate interactions among them will rapidly make your model size blow up.
lme4 is known to give false-positive convergence warnings in some cases, but they look real in this case; I think the problem is that you have a model that is way overspecified/too complex ... see if stripping it down (as suggested below) helps.
In general you shouldn't include a categorical variable (factor) as both a fixed effect and a random-effect grouping variable: that's a redundant model specification.
I would say a reasonable start for this model would be
lmer(log(WaterChlA) ~ Day*SPrich + (1|Compo) + (1|ExpRun/TankNo),
data = Wetland, na.action = na.exclude)
although trying to estimate the interaction between Day and SPrich gives you 20 parameters, which is pushing it a bit. You might consider whether you're willing to convert one or both of those factors to numeric (i.e. only looking for linear trends, or if you used poly(Day,SPrich,degree=2) that would give you quadratic terms with only 5 parameters ...) | How to model nested fixed-factor with GLMM | Stream of consciousness:
you might want to consider log-transforming the response (provided there are no zeros) rather than using the log link, i.e. lmer(log(WaterChlA) ~ ...) rather than glmer(Water | How to model nested fixed-factor with GLMM
Stream of consciousness:
you might want to consider log-transforming the response (provided there are no zeros) rather than using the log link, i.e. lmer(log(WaterChlA) ~ ...) rather than glmer(WaterChla ~ ..., family=gaussian(link="log")); I say this because log-transforming can take care of heteroscedasticity in the response (specifically a standard deviation approximately proportional to the expected mean value), which the log-link approach doesn't do (also, lmer tends to be a little bit faster and more stable than glmer)
Nested fixed effects are indeed hard to specify in linear models (as opposed to ANOVA frameworks), where the underlying framework is explicitly trying to estimate parameters rather than just evaluate sums of squares/proportions of variance explained. If the parameters are not uniquely identifiable, then the model will end up dropping some terms. If your experimental design doesn't allow it, you simply can't estimate an interaction, which is necessary for nested terms. For example, you can't tell how the effect of Compo=AB varies among species richnesses, because Compo=AB only exists when species richness is 2. You have a further problem: you can't even estimate the effect of species richness uniquely when Compo is in the model, because Compo is redundant with species richness (once you know the Compo, you know the species richness). Such redundancy is allowed in variance-decomposition approaches, or in Bayesian models, but not in linear models based on parameter estimation and models descended from them. You could make Compo a random effect (also a good modeling strategy because it has many levels, which will be expensive in terms of degrees of freedom); that would still allow you to quantify the effects.
Since you have 480 data points, you should keep in mind the rule of thumb (e.g. from Harrell's Regression Modeling Strategies) that you should not try to fit a model with more than at most n/10 = 48 parameters (that's extreme - typically the rule is stated as n/20, which would mean 24 parameters). Because your predictors are all factors, trying to estimate interactions among them will rapidly make your model size blow up.
lme4 is known to give false-positive convergence warnings in some cases, but they look real in this case; I think the problem is that you have a model that is way overspecified/too complex ... see if stripping it down (as suggested below) helps.
In general you shouldn't include a categorical variable (factor) as both a fixed effect and a random-effect grouping variable: that's a redundant model specification.
I would say a reasonable start for this model would be
lmer(log(WaterChlA) ~ Day*SPrich + (1|Compo) + (1|ExpRun/TankNo),
data = Wetland, na.action = na.exclude)
although trying to estimate the interaction between Day and SPrich gives you 20 parameters, which is pushing it a bit. You might consider whether you're willing to convert one or both of those factors to numeric (i.e. only looking for linear trends, or if you used poly(Day,SPrich,degree=2) that would give you quadratic terms with only 5 parameters ...) | How to model nested fixed-factor with GLMM
Stream of consciousness:
you might want to consider log-transforming the response (provided there are no zeros) rather than using the log link, i.e. lmer(log(WaterChlA) ~ ...) rather than glmer(Water |
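As a hedged follow-up to the last point, a sketch of the reduced-parameter variant with Day and SPrich treated as numeric trends; it assumes Day and SPrich in the Wetland data can sensibly be coerced to numbers, which may not hold for the real design, so treat this purely as a template.
# Sketch: numeric-trend version of the model suggested above
library(lme4)
Wetland$DayN    <- as.numeric(as.character(Wetland$Day))
Wetland$SPrichN <- as.numeric(as.character(Wetland$SPrich))
m_trend <- lmer(log(WaterChlA) ~ poly(DayN, SPrichN, degree = 2) +
                  (1 | Compo) + (1 | ExpRun/TankNo),
                data = Wetland, na.action = na.exclude)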