Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
617645 | 1 | 617655 | null | 0 | 10 | I am new to SEM. My aim is to determine which factors (several intrinsic and several extrinsic) influence school participation in children with disabilities. Some of my latent variables are scores on questionnaires, which have a lot of items (around 70 - 80). These items are divided into subscales.
- I wanted to know if I can use subscale scores as "observed" variables.
The answer appears to be yes. However, I have never seen that done in a paper. Do you have any references on that?
- I also read that there should be at least 3 observed variables for a latent variable. Are there references for that as well? And references stating that using only two can be OK?
- I fear that my model will be too complex and won't converge. If that is the case, is it appropriate to run 2 models (1 for intrinsic and the other for extrinsic factors)? What kind of problems could this approach pose?
Any answers and comments on these questions would be much appreciated!
Thanks!
| Can we use "subscale scores" as "observed variables" in SEM ? + dividing a model? | CC BY-SA 4.0 | null | 2023-06-02T08:27:58.863 | 2023-06-02T10:25:06.203 | null | null | 286497 | [
"structural-equation-modeling",
"latent-variable",
"scales"
] |
617646 | 1 | null | null | 1 | 18 | I was reading [Mader et al. 2023](https://www.pnas.org/doi/10.1073/pnas.2212154120), where they model the effect of a variable (neuroticism) on both the outcome's (negative emotion) mean and variance. I noticed that the way the neuroticism score is obtained is by taking the mean of the scale items. Whether or not that is a reasonable choice here, I am aware of objections against ignoring the ordinal nature of measurements (e.g. [Liddell and Kruschke 2017](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2692323)).
I also know that most modern SEM tools support modelling ordinal responses (I'm familiar with lavaan and Stata's GSEM). However, I don't know of tools which also support placing predictors on the variance of endogenous variables.
So far I have been able to find approaches developed in the context of measurement invariance (see LSEM and MFA in [Hildebrandt et al. 2016](https://www.tandfonline.com/doi/full/10.1080/00273171.2016.1142856)), and I know that scale-location transformations that apply uniformly over all measures of a given factor are statistically indistinguishable from scale-location transformations of the factor itself, so I believe that modelling such dependencies should be possible, at least in principle.
My question is simply whether there is any SEM software out there that supports simultaneous estimation of the usual SEM parameters with added parametric dependencies in the variances.
| SEM: parametric modelling of latent variances | CC BY-SA 4.0 | null | 2023-06-02T08:31:33.063 | 2023-06-02T08:32:46.760 | 2023-06-02T08:32:46.760 | 389316 | 389316 | [
"regression",
"heteroscedasticity",
"structural-equation-modeling"
] |
617647 | 2 | null | 617606 | 0 | null | The closest mathematical problem to your question is to prove that two sets of random variables are statistically independent (in the context of a machine learning classifier those sets would be (i) the outcome, call it y, and (ii) the features, call them X). I guess you cannot solve the problem in all generality, but you can if you know/assume something regarding the distribution of the variables, and you can do that numerically to a certain level of confidence.
You basically have to prove that:
$$P(X,y)=P(X)P(y)$$
i.e., that the joint probability distribution of the two random variables can be factored into the product of their individual probability distributions.
That is the academic answer. In reality, one of the advantages of a machine learning approach is that you do not need to know the probability distribution of your variables, and typically you do not know it/are not very interested in it - you only care whether you can find a predictive relationship between the two sets that generalizes to unseen data (of the same type). So there is no way to prove definitively that there is no algorithm you have not tried yet that would be able to perform classification above chance (or find any other predictive relationship between the features and the outcome). You can just try new algorithms until you are tired/hopeless.
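For concreteness, here is a minimal Python sketch of the "numerically, to a certain level of confidence" route: a permutation test of one classifier's score (the synthetic data and the choice of random forest are just placeholders). A small p-value only shows that this particular algorithm beats chance; a large one does not prove independence.
```
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import permutation_test_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)  # stand-in data
clf = RandomForestClassifier(n_estimators=100, random_state=0)

# Compare the real cross-validated score against scores obtained with shuffled labels
score, perm_scores, p_value = permutation_test_score(
    clf, X, y, scoring="accuracy", cv=5, n_permutations=200, random_state=0
)
print(f"accuracy={score:.3f}, chance level~{perm_scores.mean():.3f}, p={p_value:.3f}")
```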
| null | CC BY-SA 4.0 | null | 2023-06-02T08:36:47.673 | 2023-06-02T08:36:47.673 | null | null | 181921 | null |
617648 | 1 | null | null | 1 | 17 | In Bayesian linear regression, if we want to get confidence intervals for predictions of a new observation. I was thinking of the following two options.
- Use the quantiles of samples drawn from the posterior of the coefficients, $\beta | \mathbf{y}; \mathbf{X}$
- Use the predictive posterior distribution, $y^* \, | \beta, \mathbf{y}; \mathbf{x}^*, \mathbf{X}$
Where $\mathbf{x}^*$ is a new observation. What are the advantages of 2 over 1?
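For illustration, a minimal Python sketch contrasting the two intervals, using a toy conjugate Gaussian model with known noise standard deviation and a $N(0,\tau^2 I)$ prior on the coefficients (simplifying assumptions that are not part of the question):
```
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = X beta_true + noise, with known noise sd (a simplifying assumption)
n, p, sigma, tau = 100, 3, 1.0, 10.0
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.normal(0, sigma, size=n)

# Conjugate posterior: beta | y ~ N(m, V) under the prior beta ~ N(0, tau^2 I)
V = np.linalg.inv(X.T @ X / sigma**2 + np.eye(p) / tau**2)
m = V @ X.T @ y / sigma**2
beta_draws = rng.multivariate_normal(m, V, size=5000)

x_star = rng.normal(size=p)                           # a new observation
mean_draws = beta_draws @ x_star                      # option 1: uncertainty about E[y* | x*]
pred_draws = mean_draws + rng.normal(0, sigma, 5000)  # option 2: also adds observation noise

print("option 1 (mean of y*):", np.percentile(mean_draws, [2.5, 97.5]))
print("option 2 (a new y* itself):", np.percentile(pred_draws, [2.5, 97.5]))
```
The second interval is wider because it includes the observation noise on top of the uncertainty about the regression coefficients.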
| In Bayesian linear regression Advantages of predictive posterior compared to posterior of model coefficients | CC BY-SA 4.0 | null | 2023-06-02T09:12:58.423 | 2023-06-02T09:12:58.423 | null | null | 283493 | [
"bayesian",
"inference",
"markov-chain-montecarlo",
"stan"
] |
617649 | 1 | null | null | -1 | 20 | Which method to use and how to compute a correction factor from 24 independent variables?
In my experiment the 24 values lie in the range [1.6, 2.4], and I suppose the nominal value equals 2.
Example:
```
2.049035,1.838166,1.932433,1.996739,1.905809,1.993102,1.980901,2.081076,1.958306,1.988068,1.834962,1.983945,2.437743,1.835268,2.110528,2.088951,1.819848,2.201060,1.680312,1.734921,2.237787,2.226321,2.164516,1.873069.
```
First I compute the median of the 24 values, and then I apply a correction factor = 2 - median. Is this correct?
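In Python/NumPy, the procedure described above is simply (values copied from the block above):
```
import numpy as np

vals = np.array([
    2.049035, 1.838166, 1.932433, 1.996739, 1.905809, 1.993102, 1.980901, 2.081076,
    1.958306, 1.988068, 1.834962, 1.983945, 2.437743, 1.835268, 2.110528, 2.088951,
    1.819848, 2.201060, 1.680312, 1.734921, 2.237787, 2.226321, 2.164516, 1.873069,
])
correction = 2 - np.median(vals)
print(np.median(vals), correction)  # median ~ 1.986, correction ~ 0.014
```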
| Best (or accepted) correction factor | CC BY-SA 4.0 | null | 2023-06-02T09:29:27.777 | 2023-06-03T07:40:57.507 | 2023-06-03T07:40:57.507 | 121522 | 389410 | [
"regression",
"orthogonal"
] |
617650 | 1 | null | null | 0 | 47 | I want to use truncated Cauchy distribution as my prior. $Ca^+(x; 0, b)$ is the truncated Cauchy distribution with pdf $$f(x|b)=\frac{2}{\pi}\times \frac{1}{b[1+(x/b)^2]}I_{[x>0]}, b>0$$
But there is no Cauchy distribution in JAGS. I have found that dscaled.gamma(s, df) in the glm module may be able to solve my problem. However, I cannot find the pdf of the scaled gamma distribution, so I don't know how to set its parameters. Is there any simple way to specify a Cauchy distribution as a prior in JAGS?
| JAGS: How can I apply a truncated Cauchy distribution in a prior? | CC BY-SA 4.0 | null | 2023-06-02T09:43:19.197 | 2023-06-03T04:26:39.070 | null | null | 350153 | [
"bayesian"
] |
617651 | 2 | null | 617601 | 1 | null | Not a full solution, but a few hints to help you on your way:
$\log|\Sigma| = 2\log|D| + \log|A|$
$\Sigma^{-1}=D^{-1}A^{-1}D^{-1}$
$A^{-1}=\frac{1}{1-\rho}(I-\frac{\rho}{1+(k-1)\rho}jj^\intercal)$
$x^\intercal D^{-1} j j^\intercal D^{-1} x = (j^\intercal D^{-1} x)^2 =( \sum_a \sigma_a^{-1}x_a)^2$
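A quick numerical sanity check of these identities in Python, assuming the implied structure $\Sigma = DAD$ with $D=\mathrm{diag}(\sigma_1,\dots,\sigma_k)$ and $A=(1-\rho)I+\rho jj^\intercal$ the equicorrelation matrix (the values of $k$, $\rho$ and the $\sigma_a$ below are arbitrary):
```
import numpy as np

rng = np.random.default_rng(0)
k, rho = 4, 0.3
sigma = rng.uniform(0.5, 2.0, size=k)            # marginal standard deviations (arbitrary)
D = np.diag(sigma)
j = np.ones((k, 1))
A = (1 - rho) * np.eye(k) + rho * (j @ j.T)      # equicorrelation matrix
Sigma = D @ A @ D

# log|Sigma| = 2 log|D| + log|A|
lhs1 = np.linalg.slogdet(Sigma)[1]
rhs1 = 2 * np.sum(np.log(sigma)) + np.linalg.slogdet(A)[1]

# Sigma^{-1} = D^{-1} A^{-1} D^{-1}, with the closed-form A^{-1}
A_inv = (np.eye(k) - (rho / (1 + (k - 1) * rho)) * (j @ j.T)) / (1 - rho)
Sigma_inv = np.diag(1 / sigma) @ A_inv @ np.diag(1 / sigma)

# x^T D^{-1} j j^T D^{-1} x = (sum_a x_a / sigma_a)^2
x = rng.normal(size=k)
lhs4 = x @ np.diag(1 / sigma) @ (j @ j.T) @ np.diag(1 / sigma) @ x
rhs4 = np.sum(x / sigma) ** 2

print(np.isclose(lhs1, rhs1),
      np.allclose(Sigma_inv, np.linalg.inv(Sigma)),
      np.isclose(lhs4, rhs4))
```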
| null | CC BY-SA 4.0 | null | 2023-06-02T09:54:35.173 | 2023-06-02T09:54:35.173 | null | null | 319175 | null |
617652 | 2 | null | 617637 | 3 | null | what you are missing is that only the variance sums for uncorrelated random variables, but for 100% correlated variables the standard deviation sums.
so if I have n common signals (standard deviation $\tau$) distorted by independent noise (0 mean, standard deviation $\sigma$), $s+\epsilon_i$, then the signal to noise ratio of the sum is $\frac{n^2 \tau^2}{ n \sigma^2}=\frac{n \tau^2}{ \sigma^2}$. ie the signal to noise ratio is $n$ times greater than that of a single distorted signal.
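A small simulation of this in Python (Gaussian signal and noise are assumptions made only for the illustration):
```
import numpy as np

rng = np.random.default_rng(1)
tau, sigma, n_draws = 1.0, 2.0, 200_000

for n in (1, 4, 16):
    s = rng.normal(0, tau, size=n_draws)                              # the common signal s
    noise_sum = rng.normal(0, sigma, size=(n_draws, n)).sum(axis=1)   # sum of n independent noises
    snr_emp = np.var(n * s) / np.var(noise_sum)                       # SNR of the sum n*s + noise_sum
    print(n, round(snr_emp, 2), n * tau**2 / sigma**2)                # empirical vs theoretical n*tau^2/sigma^2
```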
| null | CC BY-SA 4.0 | null | 2023-06-02T09:55:47.803 | 2023-06-02T14:15:46.407 | 2023-06-02T14:15:46.407 | 27556 | 27556 | null |
617653 | 1 | null | null | 1 | 30 | Let $X \sim N_n(\mu, \Sigma)$, such that $AX=b$ where $A$ is a ($p \times n$) matrix, with $p \ll n$. How can I efficiently sample from this distribution?
I've seen techniques using elliptical slice sampling, but that seems to work only if the constraint is an inequality.
I realize I would be sampling from a $n-p$ subspace, that is fine.
| How can I sample a multivariate normal vector that satisfies a linear equality constraint? | CC BY-SA 4.0 | null | 2023-06-02T09:57:03.640 | 2023-06-02T09:57:03.640 | null | null | 387957 | [
"sampling",
"markov-chain-montecarlo",
"multivariate-normal-distribution"
] |
617654 | 1 | null | null | 0 | 35 | In a recent update from OpenAI, they mentioned the discovery of N neurons in their GPT-2 model. This finding raises the question: how did they arrive at this calculation? In their publication, they did not explicitly mention the methodology behind determining the number of neurons.
Upon further investigation, I came across a set of formulas that are commonly used to estimate the number of neurons and parameters in transformer-based models like GPT-2. These formulas, although not specifically mentioned by OpenAI, provide a good starting point for understanding the calculation.
The formulas are as follows:
$$\text{Neurons}= H * A * L$$
$$\text{Parameters} = A * (H^2 / A) * L$$
Here, "H" represents the hidden size, "A" refers to the number of attention heads in the model, and "L" denotes the number of layers in the model.
It is important to note that these formulas make certain assumptions about the model architecture. They assume that the number of neurons in each attention head and layer is constant, which might not be the case in practice. Additionally, they do not account for other architectural elements specific to transformer models, such as positional encodings and feed-forward networks.
While the provided formulas offer a general approach for estimating the number of neurons, it is essential to consider the specific model architecture and consult the actual implementation details for more accurate calculations.
Considering this, can anyone shed more light on how OpenAI arrived at the calculation of N neurons in their GPT-2 model? Has OpenAI explicitly mentioned their methodology or provided additional insights into the process? Any further clarification on this topic would be greatly appreciated.
Thank you in advance for your input and expertise!
| Calculation of Neurons in GPT-2: Understanding the Methodology | CC BY-SA 4.0 | null | 2023-06-02T10:15:07.930 | 2023-06-02T14:07:09.123 | null | null | 389412 | [
"gpt"
] |
617655 | 2 | null | 617645 | 0 | null |
- You can use subscale scores or item parcels (see the literature on item parceling in SEM) as indicators. However, if the subscales measure distinct, only modestly related factors (i.e., if the overall scale is multidimensional), then your single factor resulting from that strategy may be difficult to interpret, the standardized factor loadings may not be very strong, and the error terms may contain systematic scale-specific variance in addition to measurement error variance.
- This recommendation is given in many SEM textbooks. The main reason is probably that a single-factor model with just two indicators and unequal loadings is underidentified unless there are other variables in the model with which the factor is substantially correlated. In other words, a 2-indicator factor can be OK as long as the factor is correlated with at least one other variable in your model ("variable" here could refer to either an observed variable such as age or gender or another latent factor). A model with a single factor, 3 indicators, and unequal loadings is identified "per se" (as long as the indicators of this factor have substantial positive covariances). Also, models with just two indicators per factor appear to be more prone to Heywood cases (improper solutions). That being said, there are also many situations in which models with just 2 indicators per factor work just fine.
- You could do that. One downside may be that with separate models, you could not examine all predictors in a single model and thus could not fully study their potential redundancies and/or interactions.
| null | CC BY-SA 4.0 | null | 2023-06-02T10:25:06.203 | 2023-06-02T10:25:06.203 | null | null | 388334 | null |
617656 | 2 | null | 617654 | 0 | null |
### Total Neurons Formula
The formula for Total Neurons represents the total number of "neurons" in GPT-2 XL. Each "neuron" corresponds to an individual unit or processing element within the model. The formula is given by:
$$\text{Total Neurons} = L * 5H$$
- L: Number of transformer layers.
- H: Hidden size of the transformer layers.
Let's break down the technical steps:
- Each transformer layer has a hidden size denoted by H. This hidden size represents the number of dimensions in the hidden state of the transformer layer.
- In each transformer layer, the feed-forward network is applied independently to each position. The feed-forward network has an input size of H and an output size of F. In GPT-2, the output size F is chosen to be 4 times the hidden size H.
- Now, let's consider the number of neurons in each transformer layer. In a transformer layer, the total number of neurons is the sum of the number of neurons in the self-attention mechanisms and the number of neurons in the feed-forward network.
- In the self-attention mechanisms, the number of neurons can be approximated as H, which is the hidden size of the transformer layer.
- In the feed-forward network, the number of neurons can be approximated as F, which is the output size of the feed-forward network. Since F is chosen to be 4 times H, we have F = 4H.
Considering these approximations, the total number of neurons in each transformer layer can be calculated as H + F = H + 4H = 5H.
- Finally, to get the total number of neurons in the entire model, we multiply the number of neurons in each layer (5H) by the number of transformer layers (L), resulting in the formula L * 5H.
---
For example, if we consider H = 1280 and L = 48, we can calculate the total number of neurons as follows:
Total Neurons = 48 * 5 * 1280 = [307,200 neurons](https://openaipublic.blob.core.windows.net/neuron-explainer/paper/index.html#sec-interesting-neurons)
This means that GPT-2 XL, with 48 transformer layers and a hidden size of 1280, has a total of 307,200 "neurons".
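As a quick check of the arithmetic, the 5H-per-layer approximation described above can be written as a tiny Python helper (this only restates the formula already given):
```
def total_neurons(num_layers: int, hidden_size: int) -> int:
    # ~H "neurons" in the attention part plus ~4H in the feed-forward block, per layer
    return num_layers * 5 * hidden_size

print(total_neurons(num_layers=48, hidden_size=1280))  # 307200, matching the example above
```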
### Total Parameters Formula
The formula for Total Parameters represents the total number of trainable parameters in GPT-2 XL. Parameters are the learnable variables in the model that are adjusted during the training process. The formula is given by:
$$\text{Total Parameters} = 7 * H^2 * L + 4 * H * L$$
- L: Number of transformer layers.
- H: Hidden size of the transformer layers.
The formula consists of two terms:
- The term 7 * H^2 * L represents the number of parameters associated with the self-attention mechanisms. It considers the hidden size of the transformer layers (H) squared and multiplies it by 7 to account for the query, key, and value linear transformations. The result is then multiplied by the number of transformer layers (L).
- The term 4 * H * L represents the number of parameters associated with the feed-forward networks. It considers the hidden size of the transformer layers (H) multiplied by 4, which is derived from the assumption that the hidden size of the feed-forward networks is 4 times the hidden size of the transformer layers. The result is then multiplied by the number of transformer layers (L).
---
For example, if we consider H = 1280 and L = 48, we can calculate the total number of parameters as follows:
Total Parameters = 7 * 1280^2 * 48 + 4 * 1280 * 48 = 1,591,101,440
This means that GPT-2 XL, with a hidden size of 1280 and 48 transformer layers, has a total of 1,591,101,440 trainable parameters.
These formulas provide insights into the complexity and capacity of GPT-2 XL, quantifying the number of "neurons" and trainable parameters based on the given values of H and L.
| null | CC BY-SA 4.0 | null | 2023-06-02T10:26:34.567 | 2023-06-02T14:07:09.123 | 2023-06-02T14:07:09.123 | 389412 | 389412 | null |
617657 | 1 | null | null | 0 | 3 | I decided to train GCN on the Cora dataset for the node classification task, however, with the random labels, i.e., applying `np.random.shuffle(labels)`. For the default set of parameters, I am getting an accuracy of around 0.3 for the test set and 0.4 for the train set. I expect that for the random labels, the accuracy would be `1/number of classes`. So in the case of Cora: `1/7 = 0.14`.
Do you have any intuition why graph neural networks perform better than the random baseline? I am aware that in [1] the authors trained models on random labels and achieved perfect results on the train set. However, on the test set they were still around `1/number of classes`.
I checked simpler models such as random forests or SVC, and the final accuracy on the test set is indeed 1/7.
[1] Zhang, C., Bengio, S., Hardt, M., Recht, B., & Vinyals, O. (2021). Understanding deep learning (still) requires rethinking generalization. Communications of the ACM, 64(3), 107-115.
| Node classification with random labels for GNNs | CC BY-SA 4.0 | null | 2023-06-02T10:43:21.657 | 2023-06-02T10:43:21.657 | null | null | 216281 | [
"classification",
"labeling",
"graph-neural-network"
] |
617658 | 2 | null | 100159 | 0 | null | (1) and (2) are attempts to summarize the distribution of the estimates over repeating the cross-validation, but as @cbeleites unhappy with SX mentioned, this is not a meaningful interval for the true AUC. As mentioned by @user44764, your answer (3) is wrong as it tacitly assumes independence of AUC values across folds, which is wrong. It would only be correct if you had several AUC estimates of independent test datasets, and even then only apply to the specific training dataset, not to the AUC over all possible training datasets. To estimate the latter, you would need several sets of training and test datasets for which to calculate AUC, and then find the variance between them, which is rare. Instead, cross-validation is commonly used to estimate this latter AUC.
LeDell et al. (2015) provide an attractive method to find the confidence interval for the AUC, with an R implementation: [Computationally efficient confidence intervals for cross-validated area under the ROC curve estimates](https://doi.org/10.1214/15-EJS1035).
| null | CC BY-SA 4.0 | null | 2023-06-02T10:50:39.687 | 2023-06-02T10:50:39.687 | null | null | 98942 | null |
617659 | 2 | null | 617614 | 8 | null |
## Detailed explanation of the problem:
In the case of X being near-singular (high collinearity/covariance between features), different issues were coming from both `scipy.linalg.lstsq()` and `sklearn.linear_model.LinearRegression()`
Source of error 1: As @SextusEmpiricus explained, the matrix being near-singular leads to rounding errors that enormously impact the final predictions. In this sense, `scipy.linalg.lstsq()` is silently failing WITHOUT raising any warning or error.
Source of error 2: The matrix coming from pandas was F-contiguous. `sklearn` converts it to C-contiguous before calling `scipy.linalg.lstsq()`, and then `predict()` performs a matrix multiplication directly on the F-contiguous array. This led to another layer of rounding errors. I opened [another question here on Stack Overflow](https://stackoverflow.com/questions/76388886/python-rounding-errors-between-c-contiguous-and-f-contiguous-arrays-for-matrix)
Source of error 3: The first thing that `LinearRegression()` does is to center the dataframe. This goes badly in my case; I still struggle to understand exactly why.
Note: Please note that these rounding errors also depend on CPUs and hardware, which makes it even harder to achieve reproducibility.
---
## (Partial) Work-Around:
To work around the `sklearn` problems, one can:
- Ensure input matrices/arrays are C-contiguous
- Stop relying on LinearRegression's fit_intercept=True and instead center the data manually first:
```
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

for seed in range(1000):
    np.random.seed(seed)
    s = pd.Series(np.random.normal(10, 1, size=1_000))
    l_com = np.arange(100)
    df_Xy = pd.concat([s.ewm(com=com).mean() for com in l_com], axis=1)
    df_Xy['y'] = s.shift(-1)
    df_Xy.dropna(inplace=True)

    X = np.ascontiguousarray(df_Xy[l_com].values)
    y = np.ascontiguousarray(df_Xy.y.values)

    # Center X and y manually instead of relying on fit_intercept=True
    X_offset = X.mean(axis=0)
    y_offset = y.mean()
    X_centered = X - X_offset
    y_centered = y - y_offset

    model = LinearRegression(fit_intercept=False)  # We don't rely on sklearn fit_intercept anymore
    model.fit(X_centered, y_centered)
    assert model.score(X_centered, y_centered) > 0  # ALL GOOD
```
---
## Moving forward / Long-term Solution:
- I opened an issue on the SciPy GitHub to raise a warning in scipy.linalg.lstsq when the X matrix is near-singular.
- I opened an issue on the scikit-learn GitHub about the inconsistency between C-contiguous and F-contiguous arrays.
| null | CC BY-SA 4.0 | null | 2023-06-02T10:52:02.820 | 2023-06-03T09:45:21.837 | 2023-06-03T09:45:21.837 | -1 | 99438 | null |
617660 | 1 | null | null | 0 | 7 | I am trying to understand the implications of an IRF. Specifically in a VAR system.
Here is documentation I looked at:
[https://www.statsmodels.org/stable/vector_ar.html#impulse-response-analysis](https://www.statsmodels.org/stable/vector_ar.html#impulse-response-analysis)
[https://www.r-econometrics.com/timeseries/irf/](https://www.r-econometrics.com/timeseries/irf/)
[https://towardsdatascience.com/multivariate-autoregressive-models-and-impulse-response-analysis-cb5ead9b2b68](https://towardsdatascience.com/multivariate-autoregressive-models-and-impulse-response-analysis-cb5ead9b2b68)
As I understand it, their main purpose is to describe the evolution of a model's variables in reaction to a one-unit shock in one variable and its effect on another variable while keeping all other variables constant. (This last part is what worries me.)
What I question:
In a VAR system, when all variables are endogenous, they all influence each other.
Imagine you have a VAR system with 12 endogenous variables.
A one-unit shock in one variable will have an effect on each of these 12 variables.
Is this aggregated effect looked at when looking at an IRF between two variables?
So imagine I want to look at the shock of Var1 -> Var2.
Which would mean a one-unit shock in Var1, while keeping all other variables constant.
However, a one-unit shock in Var1 also has effects on the forecasted values of Var3 to Var12, and that change in forecasted values has an effect on Var2.
So it would be a naive approach to look at the isolated effect of Var1 on Var2 while keeping all other variables constant, because that assumes that the change in Var1 does not cause a change in Var3 - Var12, which it does, and which has an impact on Var2.
So my question is, in IRF:
- Does it quantify the isolated response of Var2 to a one-unit shock in Var1, while all other variables are held constant?
- Or does it quantify the response to a one-unit shock in Var1, with all of its implications for the other variables, and look at the resulting effect on Var2?
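For reference, a minimal sketch of how such an IRF is computed with the statsmodels API linked above (the three white-noise series are only stand-ins for the 12 endogenous variables):
```
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
data = pd.DataFrame(rng.normal(size=(300, 3)), columns=["Var1", "Var2", "Var3"])

results = VAR(data).fit(maxlags=2)
irf = results.irf(periods=10)               # impulse responses, 10 steps ahead
irf.plot(impulse="Var1", response="Var2")   # response of Var2 to a shock in Var1
```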
| Impulse Response in a VAR model - all endogenous variables | CC BY-SA 4.0 | null | 2023-06-02T10:52:30.987 | 2023-06-02T10:52:30.987 | null | null | 246234 | [
"vector-autoregression",
"impulse-response"
] |
617661 | 1 | null | null | 0 | 13 | I am wondering if it is possible to make ROC-AUC curves for GEE models? I found few papers who did that and it wasn't clear for me. I thought it was impossible given how they are marginal models. Would someone give me their opinion and how to proceed about producing a curve if possible?
| ROC-AUC in GEE models? | CC BY-SA 4.0 | null | 2023-06-02T10:58:56.147 | 2023-06-02T10:58:56.147 | null | null | 388039 | [
"machine-learning",
"mathematical-statistics"
] |
617662 | 1 | null | null | 0 | 18 | I am currently working on a very imbalanced dataset:
- 24 million transactions (rows of data)
- 30,000 fraudulent transactions
(0.1% of total transactions)
and I am using XGBoost as the model to predict whether a transaction is fraudulent or not. After tuning some hyperparameters via Optuna, I obtained the following results:
F1 Score on Training Data : 0.57417479049085
F1 Score on Testing Data : 0.8719438392641008
PR AUC score on Training Data : 0.9918559271777408
PR AUC score on Testing Data : 0.9077624174590952
```
Training report
precision recall f1-score support
0 1.00 1.00 1.00 20579668
1 0.47 1.00 0.64 25179
accuracy 1.00 20604847
macro avg 0.73 1.00 0.82 20604847
weighted avg 1.00 1.00 1.00 20604847
Test report
precision recall f1-score support
0 1.00 1.00 1.00 2058351
1 0.83 0.93 0.87 2087
accuracy 1.00 2060438
macro avg 0.91 0.96 0.94 2060438
weighted avg 1.00 1.00 1.00 2060438
```
The following are my loss values, learning curves and confusion matrices.
Loss data: validation_0 is the training set, validation_1 is the testing set
```
[0] validation_0-aucpr:0.75831 validation_0-logloss:0.67418 validation_1-aucpr:0.17989 validation_1-logloss:0.67417
[10] validation_0-aucpr:0.78157 validation_0-logloss:0.52305 validation_1-aucpr:0.42574 validation_1-logloss:0.51965
[20] validation_0-aucpr:0.83228 validation_0-logloss:0.41181 validation_1-aucpr:0.79299 validation_1-logloss:0.40593
[30] validation_0-aucpr:0.84335 validation_0-logloss:0.32956 validation_1-aucpr:0.82845 validation_1-logloss:0.32171
[40] validation_0-aucpr:0.86026 validation_0-logloss:0.26683 validation_1-aucpr:0.86401 validation_1-logloss:0.25788
[50] validation_0-aucpr:0.87519 validation_0-logloss:0.21770 validation_1-aucpr:0.86298 validation_1-logloss:0.20919
[60] validation_0-aucpr:0.88714 validation_0-logloss:0.17906 validation_1-aucpr:0.86130 validation_1-logloss:0.17034
[70] validation_0-aucpr:0.89531 validation_0-logloss:0.14839 validation_1-aucpr:0.86285 validation_1-logloss:0.14016
[80] validation_0-aucpr:0.89770 validation_0-logloss:0.12463 validation_1-aucpr:0.86329 validation_1-logloss:0.11545
[90] validation_0-aucpr:0.90004 validation_0-logloss:0.10519 validation_1-aucpr:0.86052 validation_1-logloss:0.09647
[100] validation_0-aucpr:0.90534 validation_0-logloss:0.08897 validation_1-aucpr:0.87044 validation_1-logloss:0.07986
[110] validation_0-aucpr:0.91044 validation_0-logloss:0.07617 validation_1-aucpr:0.86994 validation_1-logloss:0.06662
[120] validation_0-aucpr:0.91458 validation_0-logloss:0.06538 validation_1-aucpr:0.86962 validation_1-logloss:0.05589
[130] validation_0-aucpr:0.91902 validation_0-logloss:0.05645 validation_1-aucpr:0.87092 validation_1-logloss:0.04684
[140] validation_0-aucpr:0.92276 validation_0-logloss:0.04895 validation_1-aucpr:0.87258 validation_1-logloss:0.03967
[150] validation_0-aucpr:0.92713 validation_0-logloss:0.04308 validation_1-aucpr:0.87285 validation_1-logloss:0.03377
[160] validation_0-aucpr:0.93179 validation_0-logloss:0.03788 validation_1-aucpr:0.87703 validation_1-logloss:0.02851
[170] validation_0-aucpr:0.93487 validation_0-logloss:0.03361 validation_1-aucpr:0.87967 validation_1-logloss:0.02426
[180] validation_0-aucpr:0.93875 validation_0-logloss:0.03013 validation_1-aucpr:0.88027 validation_1-logloss:0.02093
[190] validation_0-aucpr:0.94333 validation_0-logloss:0.02688 validation_1-aucpr:0.88284 validation_1-logloss:0.01781
[200] validation_0-aucpr:0.94592 validation_0-logloss:0.02454 validation_1-aucpr:0.88497 validation_1-logloss:0.01577
[210] validation_0-aucpr:0.95043 validation_0-logloss:0.02236 validation_1-aucpr:0.89025 validation_1-logloss:0.01363
[220] validation_0-aucpr:0.95464 validation_0-logloss:0.02033 validation_1-aucpr:0.89146 validation_1-logloss:0.01172
[230] validation_0-aucpr:0.95761 validation_0-logloss:0.01880 validation_1-aucpr:0.89327 validation_1-logloss:0.01044
[240] validation_0-aucpr:0.96080 validation_0-logloss:0.01747 validation_1-aucpr:0.89531 validation_1-logloss:0.00912
[250] validation_0-aucpr:0.96417 validation_0-logloss:0.01625 validation_1-aucpr:0.89891 validation_1-logloss:0.00802
[260] validation_0-aucpr:0.96675 validation_0-logloss:0.01519 validation_1-aucpr:0.90279 validation_1-logloss:0.00712
[270] validation_0-aucpr:0.96898 validation_0-logloss:0.01434 validation_1-aucpr:0.90530 validation_1-logloss:0.00645
[280] validation_0-aucpr:0.97143 validation_0-logloss:0.01353 validation_1-aucpr:0.90629 validation_1-logloss:0.00573
[290] validation_0-aucpr:0.97334 validation_0-logloss:0.01284 validation_1-aucpr:0.90836 validation_1-logloss:0.00520
[300] validation_0-aucpr:0.97506 validation_0-logloss:0.01216 validation_1-aucpr:0.90954 validation_1-logloss:0.00468
[310] validation_0-aucpr:0.97660 validation_0-logloss:0.01161 validation_1-aucpr:0.91150 validation_1-logloss:0.00427
[320] validation_0-aucpr:0.97800 validation_0-logloss:0.01108 validation_1-aucpr:0.91411 validation_1-logloss:0.00386
[330] validation_0-aucpr:0.97927 validation_0-logloss:0.01068 validation_1-aucpr:0.91551 validation_1-logloss:0.00361
[340] validation_0-aucpr:0.98054 validation_0-logloss:0.01019 validation_1-aucpr:0.91600 validation_1-logloss:0.00323
[350] validation_0-aucpr:0.98177 validation_0-logloss:0.00977 validation_1-aucpr:0.91776 validation_1-logloss:0.00299
[360] validation_0-aucpr:0.98272 validation_0-logloss:0.00938 validation_1-aucpr:0.92028 validation_1-logloss:0.00275
[370] validation_0-aucpr:0.98370 validation_0-logloss:0.00903 validation_1-aucpr:0.92015 validation_1-logloss:0.00256
[380] validation_0-aucpr:0.98444 validation_0-logloss:0.00877 validation_1-aucpr:0.92196 validation_1-logloss:0.00242
[390] validation_0-aucpr:0.98514 validation_0-logloss:0.00851 validation_1-aucpr:0.92389 validation_1-logloss:0.00229
[400] validation_0-aucpr:0.98580 validation_0-logloss:0.00828 validation_1-aucpr:0.92348 validation_1-logloss:0.00219
[410] validation_0-aucpr:0.98643 validation_0-logloss:0.00801 validation_1-aucpr:0.92514 validation_1-logloss:0.00203
[420] validation_0-aucpr:0.98711 validation_0-logloss:0.00774 validation_1-aucpr:0.92575 validation_1-logloss:0.00189
[430] validation_0-aucpr:0.98774 validation_0-logloss:0.00750 validation_1-aucpr:0.92427 validation_1-logloss:0.00177
[440] validation_0-aucpr:0.98832 validation_0-logloss:0.00725 validation_1-aucpr:0.92531 validation_1-logloss:0.00164
[450] validation_0-aucpr:0.98887 validation_0-logloss:0.00708 validation_1-aucpr:0.92623 validation_1-logloss:0.00160
[460] validation_0-aucpr:0.98931 validation_0-logloss:0.00690 validation_1-aucpr:0.92806 validation_1-logloss:0.00151
[470] validation_0-aucpr:0.98963 validation_0-logloss:0.00674 validation_1-aucpr:0.92860 validation_1-logloss:0.00146
[480] validation_0-aucpr:0.99005 validation_0-logloss:0.00656 validation_1-aucpr:0.92980 validation_1-logloss:0.00140
[490] validation_0-aucpr:0.99038 validation_0-logloss:0.00642 validation_1-aucpr:0.93051 validation_1-logloss:0.00135
[500] validation_0-aucpr:0.99077 validation_0-logloss:0.00628 validation_1-aucpr:0.93089 validation_1-logloss:0.00131
[510] validation_0-aucpr:0.99108 validation_0-logloss:0.00613 validation_1-aucpr:0.93270 validation_1-logloss:0.00126
[520] validation_0-aucpr:0.99138 validation_0-logloss:0.00601 validation_1-aucpr:0.93254 validation_1-logloss:0.00122
[530] validation_0-aucpr:0.99166 validation_0-logloss:0.00590 validation_1-aucpr:0.93199 validation_1-logloss:0.00119
[540] validation_0-aucpr:0.99197 validation_0-logloss:0.00577 validation_1-aucpr:0.93318 validation_1-logloss:0.00116
[550] validation_0-aucpr:0.99224 validation_0-logloss:0.00566 validation_1-aucpr:0.93408 validation_1-logloss:0.00112
[560] validation_0-aucpr:0.99250 validation_0-logloss:0.00554 validation_1-aucpr:0.93327 validation_1-logloss:0.00109
[570] validation_0-aucpr:0.99278 validation_0-logloss:0.00542 validation_1-aucpr:0.93397 validation_1-logloss:0.00106
[580] validation_0-aucpr:0.99300 validation_0-logloss:0.00530 validation_1-aucpr:0.93339 validation_1-logloss:0.00102
[590] validation_0-aucpr:0.99324 validation_0-logloss:0.00521 validation_1-aucpr:0.93372 validation_1-logloss:0.00100
[599] validation_0-aucpr:0.99338 validation_0-logloss:0.00513 validation_1-aucpr:0.93378 validation_1-logloss:0.00099
```
Confusion matrices of the training and testing sets
[](https://i.stack.imgur.com/XPQt1.png)
[](https://i.stack.imgur.com/XPJf3.png)
Learning curve
[](https://i.stack.imgur.com/M1atJ.png)
Although the PR AUC scores are quite high, the F1 score on my training data is quite low while its PR AUC score is abnormally high. When interpreting the loss and the learning curve, I see that the model is learning and generalizing well on the two sets (although I have only included the testing sets here, the validation sets perform similarly). Is it safe to assume that my model is not overfitting, or is there something wrong with my interpretation? I understand that overfitting means the model performs well on the training data but cannot generalize to unseen data, and underfitting means that the model is unable to learn patterns from the training data, so its predictions suffer; but in this situation, it seems that my model performs badly on the training data yet quite well on the testing sets. If it is not overfitting, what is wrong with my data or my model, or are these results acceptable?
| Model returns low training F1 Score, but high Testing and Validation F1 score | CC BY-SA 4.0 | null | 2023-06-02T11:20:56.027 | 2023-06-02T12:22:59.923 | 2023-06-02T12:22:59.923 | 383080 | 383080 | [
"machine-learning",
"classification",
"unbalanced-classes",
"overfitting"
] |
617663 | 1 | null | null | 0 | 9 | Consider $X_n$ and $Y_n$ to random variable that are bounded in probability.
I know that
$$cov(X_n, Y_n) = O(n^{-1})$$
and that
$$(X_n, Y_n) \rightarrow_d (E_1, E_2)$$
where $E_1$ and $E_2$ are two independent random variables. By continuous mapping theorem, we know that
$$(f(X_n), f(Y_n)) \rightarrow_d (f(E_1), f(E_2))$$
for every continuous function $f$.
I am wondering if, based on this, I can prove that
$$cov(f(X_n), f(Y_n)) = O(n^{-1})$$
?
For the moment I am able to prove that it goes to $0$, but I am not able to retain the rate.
| Rate of convergence of covariance of functional | CC BY-SA 4.0 | null | 2023-06-02T11:42:06.367 | 2023-06-02T11:42:06.367 | null | null | 365245 | [
"covariance",
"density-function",
"convergence"
] |
617664 | 1 | null | null | 0 | 9 | I struggle to understand how batch normalization (BN) enables larger learning rates during gradient descent according to the [original paper](https://arxiv.org/pdf/1502.03167.pdf). I am aware that some of the explanations given in the latter have been [debunked](https://arxiv.org/pdf/1805.11604.pdf), but I would like to understand the logic behind them anyway.
The central claim is that BN has this effect on the learning rate because it prevents exploding gradients. I find the intuition behind this best explained in a [video](https://www.youtube.com/embed/Xogn6veSyxA?start=325&end=664&version=3) by Ian Goodfellow, where he uses the "simplest possible network" for illustration:
$\hat{y} = abcde$
so, a network that consists of 5 one-unit layers (where $a/b/c/d/e$ are the respective weights of the units), and which does not introduce non-linearity through activation functions. Obviously, during forward propagation, the value of $a$ will determine the statistics of the activation at $d$, as Goodfellow explains. Similarly, during backpropagation, the value of $d$ will influence the gradient of $a$ since the derivative w.r.t. $a$ is
$\frac{\delta \hat{y}}{\delta a} = bcde$
So far, so good. Adding normalization steps before/after each layer prevents this interaction between layers and keeps the gradients from exploding (due to the normalized value range). This way, gradient descent can make large modifications to parameters without having to adjust to the propagated effect of said modifications in later iterations, causing more linear progress and fewer oscillations. Am I correct so far?
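To make that chain concrete, a tiny numeric version of the 5-unit example in Python (the weight values are arbitrary):
```
# Gradient of y_hat = a*b*c*d*e with respect to a is simply b*c*d*e,
# so its scale is governed by the magnitudes of the downstream weights.
for w in (0.5, 1.0, 1.5):
    b = c = d = e = w
    print(f"downstream weights = {w}: d y_hat / d a = {b * c * d * e:.4f}")
```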
Now, I have been trying to apply the same logic to the network shown in the below picture (taken from this [article](https://programmathically.com/understanding-the-exploding-and-vanishing-gradients-problem/)):
[](https://i.stack.imgur.com/pvPUn.png)
Here, the partial derivative of the cost function w.r.t. the weight $w_{1}$, is given by:
$\frac{\delta J}{\delta w_{1}} = \frac{\delta J}{\delta \hat{y}} \frac{\delta \hat{y}}{\delta z_{2}} \frac{\delta z_{2}}{\delta a_{1}} \frac{\delta a_{1}}{\delta z_{1}} \frac{\delta z_{1}}{\delta w_{1}}$
if the $ReLU$ is used as the activation function and considering that $z_{i} = w_{i}a_{i-1} + b_{i}$, this becomes (leaving out $\frac{\delta J}{\delta \hat{y}}$ for simplicity, and assuming that $z_{i} > 0$):
$\begin{align}
\frac{\delta J}{\delta w_{1}} &= \frac{\delta J}{\delta \hat{y}} \cdot ReLU'(z_{2}) \cdot w_{2} \cdot ReLU'(z_{1}) \cdot x_{1}\\
&= \frac{\delta J}{\delta \hat{y}} \cdot 1 \cdot w_{2} \cdot 1 \cdot x_{1}
\end{align}$
My problem is that in the original paper, BN is applied before the activation, so $BN(w_{i}a_{i-1} + b)$, i.e. $BN(z_{i})$. However, $ReLU'(z_{i})$ is always 1 or 0. And if a different activation is used, such as the sigmoid, then $\sigma'(z_{i})$ is always $\leq 1$. My point is that I'm struggling to imagine how normalizing $z_{i}$ can make such a big difference, since the value range of $g(z_{i})$ is very restricted anyway for any activation function $g$. In the explanation by Goodfellow, the normalized values go into the multiplication unmodified, so it makes more sense to normalize them.
PS: I have asked a similar question about exploding gradients [before](https://stats.stackexchange.com/questions/616384/exploding-vanishing-gradients-deeper-understanding) ... so I guess the idea just confuses me.
| How does batch normalization enable larger learning rates (according to the original paper)? | CC BY-SA 4.0 | null | 2023-06-02T11:45:11.963 | 2023-06-02T23:07:50.183 | 2023-06-02T23:07:50.183 | 387314 | 387314 | [
"neural-networks",
"gradient-descent",
"batch-normalization"
] |
617665 | 1 | null | null | 0 | 3 | I am implementing the Wooldridge Two Way Mundlak regression and have an interacted model like so:
```
library(fixest)
reg = fepois(y ~ post_treatment:i(var1,var2,ref=0, ref2=0)|idvar+timevar, cluster="idvar", data=regdata)
```
And I'd like to recover the hazard rate impact of `post_treatment`, including standard errors, but I can't figure out how to do it building from `marginaleffects::avg_slopes`. Is there a way to do it through this or another approach?
| Recovering hazard rate for interacted Poisson model in R | CC BY-SA 4.0 | null | 2023-06-02T12:56:40.250 | 2023-06-02T12:56:40.250 | null | null | 4173 | [
"r",
"poisson-regression",
"hazard"
] |
617666 | 1 | null | null | 1 | 8 | I have two GAMs fitted with a Gamma distribution, with the same model structure with a continuous response variable and one continuous covariate, two categorical covariates, and one random effect:
```
Family: Gamma
Link function: log
Formula:
feeding_t ~ s(month, bs = "cc", k = 12) + pop_id_cat + status_cat + s(animals_id, bs = "re")
```
The only difference is the input dataset, and my goal is to compare the two models in the same plot. I would also like to make sure I understand exactly what is happening in the three different ways of plotting it: (1) a "basic" plot with the partial effects, (2) a plot with the y-axis on the response variable scale, and (3) a plot with predicted values.
The summary for the first model (Dataset A) is:
```
Parametric coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 1.13568 0.04200 27.037 < 2e-16 ***
pop_id_catP2 -0.32086 0.10830 -2.963 0.003066 **
pop_id_catP3 -0.21046 0.06111 -3.444 0.000579 ***
pop_id_catP4 -0.12059 0.05760 -2.094 0.036345 *
pop_id_catP5 -0.16188 0.10134 -1.597 0.110256
pop_id_catP6 -0.22993 0.05509 -4.173 3.06e-05 ***
pop_id_catP7 -0.19185 0.05620 -3.414 0.000646 ***
pop_id_catP8 0.12648 0.10834 1.167 0.243128
pop_id_catP9 0.12449 0.06822 1.825 0.068114 .
status_cata_m -0.15241 0.04149 -3.673 0.000242 ***
status_catfam -0.21209 0.04192 -5.059 4.38e-07 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Approximate significance of smooth terms:
edf Ref.df F p-value
s(month) 5.451 10 3.698 1.16e-05 ***
s(animals_id) 52.394 97 1.407 < 2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
R-sq.(adj) = 0.0948 Deviance explained = 10.8%
-REML = 6799.6 Scale est. = 0.33249 n = 4334
```
And here's the summary for the second model (Dataset B) is:
```
Parametric coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 1.10106 0.04263 25.829 < 2e-16 ***
pop_id_catP2 -0.27877 0.10781 -2.586 0.009752 **
pop_id_catP3 -0.16630 0.06170 -2.696 0.007054 **
pop_id_catP4 -0.10823 0.05811 -1.862 0.062608 .
pop_id_catP5 -0.15529 0.10309 -1.506 0.132050
pop_id_catP6 -0.19017 0.05513 -3.450 0.000567 ***
pop_id_catP7 -0.15379 0.05646 -2.724 0.006482 **
pop_id_catP8 -0.03964 0.10407 -0.381 0.703308
pop_id_catP9 0.09500 0.06850 1.387 0.165548
status_cata_m -0.16370 0.04185 -3.911 9.33e-05 ***
status_catfam -0.20772 0.04384 -4.738 2.22e-06 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Approximate significance of smooth terms:
edf Ref.df F p-value
s(month) 4.983 10 2.773 0.00016 ***
s(animals_id) 50.020 97 1.355 < 2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
R-sq.(adj) = 0.0757 Deviance explained = 8.49%
-REML = 6948.2 Scale est. = 0.3626 n = 4338
```
Now, this is the output of plotting the two models in the same plot (following this), without transforming the y-axis:
[](https://i.stack.imgur.com/qClR3.png)
Question 1: Is it correct to say here that the y-axis shows the several "coefficient" values of the covariate month (without the intercept and for the reference categories of the two categorical variables)? If I understand correctly, the GAM summary does not provide a specific coefficient for a smooth because there are several values (due to the non-linearity). If not, what does each value on the y-axis reflect?
---
Then, if I plot the same model outputs, but in the original scale of the response variable (it's easier to interpret and shows the actual effect of a given variable on the response variable), this is the output (following [this post](https://stats.stackexchange.com/questions/166553/creating-marginal-plots-when-using-mgcv-gam-package-in-r), [this one](https://stats.stackexchange.com/questions/531758/interpretation-of-parametric-coefficients-in-gam), and [partially this one](https://stats.stackexchange.com/questions/615420/gam-parametric-coefficients-what-is-mgcvizpterm-actually-plotting)):
```
ci_a <- confint(modelA, parm = "s(month)", type = "confidence") |>
mutate(est = exp(est + coef(modelA)[1L]),
lower = exp(lower + coef(modelA)[1L]),
upper = exp(upper + coef(modelA)[1L]))
ci_b <- confint(modelB, parm = "s(month)", type = "confidence") |>
mutate(est = exp(est + coef(modelB)[1L]),
lower = exp(lower + coef(modelB)[1L]),
upper = exp(upper + coef(modelB)[1L]))
ci_a$model <- "Dataset_A"
ci_b$model <- "Dataset_B"
ci <- rbind(ci_a, ci_b)
```
[](https://i.stack.imgur.com/VvnzN.png)
Question 2) Assuming that the plot is done correctly, I would interpret this as, e.g., in May the feeding time increases the most, up to a mean (I'm not sure if it's a mean here?) of 3.4 days, while the lowest value for feeding time is observed in July-August, with a decrease of ~0.6 days. Is this a correct interpretation?
However, in this plot, the values on the y-axis also seem to be a bit higher than the average values for the response variable (see below), and also higher than the predicted values (see below) - so this is the reason I'm not so sure about this plot/interpretation...
```
summary(datasetA$feeding_t)
Min. 1st Qu. Median Mean 3rd Qu. Max.
0.2908 1.2504 2.0004 2.3640 3.0004 11.2507
summary(datasetB$feeding_t)
Min. 1st Qu. Median Mean 3rd Qu. Max.
0.1468 1.1875 2.0000 2.3075 3.0000 11.2507
```
---
Finally, the third option is to plot the predicted values, for which I used [this package](https://vincentarelbundock.github.io/marginaleffects/articles/gam.html):
```
marginaleffects::plot_predictions(modelA, condition = "month")
```
[](https://i.stack.imgur.com/gHEbE.png)
Question 3) The pattern here is the same as before but the y-axis doesn't match the previous plot, which I thought it would, since it's predictions on the same dataset. However, the values in the y-axis do match the `summary()` information above, if the y-axis is showing the mean values.
Here, would this be a correct interpretation: "In May, the predicted feeding time is ~2.15 days, while in middle July the predicted feeding time is the lowest, at 1.85 days"? And why do they differ so much from the previous plot?
Additionally, the predicted dataset seems to include values for all categories within the categorical variables - and I was expecting this to be the marginal plots with all the other covariates set to 0 (or reference category).
And when I try to plot two models together, I get this "funny" output:
```
pred1 <- marginaleffects::predictions(modelA)
pred2 <- marginaleffects::predictions(modelB)
pred1_ <- as.data.frame(pred1)
pred2_ <- as.data.frame(pred2)
pred1_$dataset <- "dataset_A"
pred2_$dataset <- "dataset_B"
pred_all <- rbind(pred1_, pred2_)
head(pred_all)
rowid estimate std.error statistic p.value conf.low conf.high feeding_t month pop_id_cat
1 1 2.520600 0.2821735 8.932803 4.153502e-19 1.967550 3.073650 2.241667 1 P5
2 2 2.520600 0.2821735 8.932803 4.153502e-19 1.967550 3.073650 5.243056 1 P5
3 3 2.666947 0.2946896 9.050019 1.429434e-19 2.089366 3.244528 1.000000 4 P5
4 4 2.666947 0.2946896 9.050019 1.429434e-19 2.089366 3.244528 1.676389 4 P5
5 5 2.666947 0.2946896 9.050019 1.429434e-19 2.089366 3.244528 3.000694 4 P5
6 6 2.666947 0.2946896 9.050019 1.429434e-19 2.089366 3.244528 1.675694 4 P5
status_cat animals_id dataset
1 a_m 1 dataset_A
2 a_m 1 dataset_A
3 a_m 1 dataset_A
4 a_m 1 dataset_A
5 a_m 1 dataset_A
6 a_m 1 dataset_A
pred_all %>% ggplot(aes(x = month, y = estimate)) +
geom_ribbon(aes(ymin = conf.low, ymax = conf.high, fill = dataset), alpha = 0.2) +
geom_line(aes(colour = dataset)) +
theme_bw() + theme(legend.position = "bottom")
```
[](https://i.stack.imgur.com/z9pua.png)
Any suggestions?
Thank you very much in advance for any help!
| GAM plots: partial effects, shifted y-axis, or predictions - which representations/interpretations are correct/accurate? | CC BY-SA 4.0 | null | 2023-06-02T13:08:26.080 | 2023-06-02T13:08:26.080 | null | null | 117281 | [
"data-visualization",
"generalized-additive-model",
"gamma-distribution",
"intercept",
"marginal-effect"
] |
617667 | 1 | null | null | 0 | 5 | I am currently working with the R package "gsynth" based on the publication "Generalized Synthetic Control Method: Causal Inference with Interactive Fixed Effects Models" by Xu (2017).
See also: [https://cran.r-project.org/web/packages/gsynth/gsynth.pdf](https://cran.r-project.org/web/packages/gsynth/gsynth.pdf)
According to this, X represents time-varying covariates. I am not sure if I understand this correctly. Do I just need to use any known variables that vary over time? What do I have to pay attention to?
[](https://i.stack.imgur.com/5yQ8C.jpg)
| Understanding the R package "gsynth | CC BY-SA 4.0 | null | 2023-06-02T13:12:15.743 | 2023-06-02T13:19:10.833 | 2023-06-02T13:19:10.833 | 388706 | 388706 | [
"panel-data",
"causality",
"difference-in-difference",
"synthetic-controls"
] |
617668 | 1 | null | null | 0 | 18 | I've spent last several weeks learning about survival analysis, see one of the last posts at [How to simulate variability (errors) in fitting a gamma model to survival data by using a generalized minimum extreme value distribution in R?](https://stats.stackexchange.com/questions/616872/how-to-simulate-variability-errors-in-fitting-a-gamma-model-to-survival-data-b)
Now I am primarily concerned with simulating death rates and secondarily deriving survival curves for the deceased. Ultimately, this is leading towards simulating deaths/survival using extreme value distributions with heavy right tails (even if not best-fitting) for simulating conservative, very-bad-case scenarios especially when dealing with a paucity of data. The code below is a first step in that direction.
Does the approach I describe and per the code below appear reasonable? If so, are there easier or better approaches?
- I use the lung dataset from the survival package as my example.
- I use bootstrap sampling (bootSample() in code below) to derive death rates (deathRate <- ...) and to extract the lung data for only the deaths from the same bootstrapped samples where "status" == 2 (bootDeaths[[i]] <<- ...).
- Using AIC, lognormal provided the best fit for bootstrap sampled death rates. Code not shown for this goodness-of-fit testing.
- I draw lognormal random samples for each of the bootstrap samples and derive a histogram of death rates per the image below on the left.
- I then take the deaths from the same bootstrapped samples, fit them with the survreg() function using the lognormal distribution, and plot their survival curves (plot_survival_curves(...)) as shown in the image below on the right.
[](https://i.stack.imgur.com/HY4Gq.png)
Code:
```
library(MASS)
library(survival)
nbr <- 100
timeLine <- seq(0, max(lung$time))
bootDeaths <- list()
# Use bootstrapping for both average death rates and for plotting survival curves for deaths
bootSample <- sapply(
  1:100,
  function(i) {
    sampleData <- lung[sample(nrow(lung), replace = TRUE), ]
    bootDeaths[[i]] <<- sampleData[sampleData$status == 2, ] # used in plotting death survival curves later
    deathRate <- with(sampleData, mean(status == 2))
    return(deathRate)
  }
)
### Generate random samples for the lognormal distribution, calculate and plot death rates ###
fit <- MASS::fitdistr(bootSample,"lognormal")
params <- fit$estimate
sampLognorm <- rlnorm(1000, params[1], params[2])
hist(sampLognorm, breaks = "FD", col = "steelblue",
xlab = "Death rate", ylab = "Frequency", main = "Histogram of Lognormal Samples")
sampDeathRate <- mean(bootSample)
abline(v = sampDeathRate, col = "black", lty = 1, lwd = 3)
popDeathRate <- with(lung, mean(status == 2))
abline(v = popDeathRate, col = "red", lty = 1, lwd = 3)
legend("topright", legend = c(paste("Sample Average:", round(sampDeathRate, 4)),
paste("Population Average:", round(popDeathRate, 4))),
lty = c(1,1), lwd = c(3,3), col = c("black", "red"), bty = "n")
### Lognormal survival curves for patients who die ###
plot(timeLine, type = "n", xlab = "Time", ylab = "Survival Probability", main = "Lung Data Survival Plot", ylim = c(0, 1), xlim = c(0,max(lung$time)))
# Fit lognormal distribution and plot survival curves for each deceased sample
plot_survival_curves <- sapply(
  1:nbr,
  function(i) {
    sampleDat <- data.frame(bootDeaths[[i]])
    fit <- survreg(Surv(time, status == 2) ~ 1, data = sampleDat, dist = "lognormal")
    meanlog <- fit$coef
    sdlog <- fit$scale
    surv_prob <- 1 - plnorm(timeLine, meanlog = meanlog, sdlog = sdlog)
    lines(seq(0, length(surv_prob) - 1), surv_prob, col = "lightblue", lty = "solid", lwd = 0.25)
  }
)
```
| Does this approach to simulation for survival analysis, of breaking the analysis into deaths versus survivors, appear reasonable? | CC BY-SA 4.0 | null | 2023-06-02T13:22:01.337 | 2023-06-02T15:34:50.610 | null | null | 378347 | [
"r",
"survival",
"simulation",
"lognormal-distribution",
"extreme-value"
] |
617669 | 1 | null | null | 0 | 6 | I would like to calculate the required sample size for a crossover design based on a study conducted in a parallel-design. I understand that one needs the within subject CV. Unfortunately, the authors only report M(SD) for each group and each time point (treatment Baseline, Control baseline, ...).
Is there any way to roughly estimate the intrasubject CV or is there any measure that I can substitute the CV with?
Also, I am thinking about contacting the authors to ask for the CV - am I right in my understanding that I just need the CV from the treatment group?
| Intrasubject coefficient of variation for cross-over sample size calculation | CC BY-SA 4.0 | null | 2023-06-02T13:23:40.063 | 2023-06-02T13:23:40.063 | null | null | 277811 | [
"standard-deviation",
"sample-size",
"statistical-power",
"effect-size",
"crossover-study"
] |
617670 | 1 | null | null | 0 | 11 | Suppose that I have the model
$$\mathbf{y}\sim N(\mathbf{X}\boldsymbol{\beta} + \mathbf{K}\boldsymbol{\alpha} + \mathbf{H}\boldsymbol{\theta},\sigma^2\mathbf{I}),$$
where the first column in the covariate matrix $\mathbf{X}$ is ones (i.e., the intercept column), $\mathbf{H}$ and $\mathbf{K}$ are known mapping matrices, and $\boldsymbol{\beta},\boldsymbol{\alpha},\boldsymbol{\theta},\sigma^2$ are the model parameters. If I were to add the constraint that $\mathbf{K}\boldsymbol{\alpha}$ is orthogonal to the intercept, I can use conditioning by Kriging such that
- I update $\boldsymbol{\alpha}$ via standard Gibbs updates. Let's denote the update $\boldsymbol{\alpha}^*\sim N(\boldsymbol{\mu}_\alpha,\boldsymbol{\Sigma}_\alpha)$, where $\boldsymbol{\mu}_\alpha \text{ and } \boldsymbol{\Sigma}_\alpha$ are obtained via full-conditionals (i.e., if we use a multivariate normal prior on $\boldsymbol{\alpha}$).
- Then, I solve $\boldsymbol{\alpha}=\boldsymbol{\alpha}^* - \boldsymbol{\Sigma}_\alpha(\mathbf{K}^T\mathbf{K})^{-1}\mathbf{K}^T\boldsymbol{1}\left(((\mathbf{K}^T\mathbf{K})^{-1}\mathbf{K}^T\boldsymbol{1})^T \boldsymbol{\Sigma}_\alpha (\mathbf{K}^T\mathbf{K})^{-1}\mathbf{K}^T\boldsymbol{1}\right)^{-1}((\mathbf{K}^T\mathbf{K})^{-1}\mathbf{K}^T\boldsymbol{1})^T\boldsymbol{\alpha}^*$
Now, let's suppose that I also want to constrain $\mathbf{H}\boldsymbol{\theta}$ to be orthogonal to the intercept column. To me, it seems like I cannot just repeat the same two-step procedure for updating $\boldsymbol{\theta}$. Instead, I anticipate that I should use a single constraint forcing both $\mathbf{K}\boldsymbol{\alpha}$ and $\mathbf{H}\boldsymbol{\theta}$ to be orthogonal to the intercept, but I'm unsure of how to do this. Any help is greatly appreciated.
| How can I use conditioning by Kriging for two separate vectors? | CC BY-SA 4.0 | null | 2023-06-02T13:23:59.773 | 2023-06-02T13:23:59.773 | null | null | 257939 | [
"bayesian",
"constrained-regression",
"conditioning",
"kriging"
] |
617671 | 1 | null | null | 0 | 3 | I have a dataset with categorical data. Each row of data can be either a success, a failure or neutral. How do I determine what categorical data has a higher chance of producing a success? I'm looking for both single columns (for instance, values a1 and a2 from column A yield a higher chance, with of course some kind of metric like a p-value to determine how certain the success chance is) and combinations (for instance, the combination of a5, b3 and c2 have the highest chance of success).
I already tried some stuff like a chi squared test between the result column and individual columns, and in a column determining which values are different than the rest with a t-test, but I'm not sure if there are other (better) techniques I should be using.
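For concreteness, the per-column chi-squared check I mentioned looks roughly like this in Python (the example data frame and the column/value names are made up):
```
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.DataFrame({
    "A":      ["a1", "a2", "a1", "a3", "a2", "a1", "a2", "a3"],
    "B":      ["b1", "b1", "b2", "b2", "b3", "b1", "b3", "b2"],
    "result": ["success", "failure", "success", "neutral",
               "failure", "success", "neutral", "success"],
})

for col in ["A", "B"]:
    table = pd.crosstab(df[col], df["result"])   # contingency table: column values vs. outcome
    chi2, p, dof, _ = chi2_contingency(table)
    print(col, round(chi2, 2), round(p, 3), dof)
```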
| How to determine what combination of values in categorical data have the highest success chance | CC BY-SA 4.0 | null | 2023-06-02T13:37:22.283 | 2023-06-02T13:37:22.283 | null | null | 388715 | [
"categorical-data"
] |
617672 | 2 | null | 617644 | 0 | null | The initial AIC value should not matter by itself. Try adding a randomly generated variable (or a few, one by one). Does the behavior persist? That would be worrisome. If not, the variables you have tried so far just happen to have explanatory power for the dependent variable.
| null | CC BY-SA 4.0 | null | 2023-06-02T13:46:02.783 | 2023-06-02T13:46:02.783 | null | null | 53690 | null |
617673 | 1 | null | null | 0 | 7 | I am currently using a train-validation-test split and Bayesian Optimization (BO), which is straightforward. Now, I want to transition to using 3-fold cross validation and BO (+an additional test set for final evaluation). However, I see that there are two ways to do that:
- BO inside cross validation: Inside cross validation, apply BO three times, one for each foldset. The issue here is that this will create three different sets of optimal hyper-parameters (one for each foldset) and it is unclear how to combine them.
- Cross validation inside BO: The entire cross-validation procedure is encapsulated inside the black box function optimized by BO. This will create one optimal set of hyper-parameters.
So, is any of the above correct?
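To make option 2 concrete, here is a rough sketch in R of what I mean by wrapping the whole cross-validation inside the objective that BO optimizes; `fit_model()` and `evaluate()` are hypothetical placeholders for my actual model and metric, and the BO routine itself is omitted:

```r
# Hypothetical sketch: the objective function a BO routine would call repeatedly.
cv_objective <- function(hyperparams, data, k = 3) {
  folds  <- sample(rep(1:k, length.out = nrow(data)))  # random fold assignment
  scores <- numeric(k)
  for (i in 1:k) {
    train <- data[folds != i, ]
    valid <- data[folds == i, ]
    model <- fit_model(train, hyperparams)   # placeholder for my model fit
    scores[i] <- evaluate(model, valid)      # placeholder, e.g. validation loss
  }
  mean(scores)  # single number for BO to minimize
}
```

A BO library would then call `cv_objective()` with candidate hyper-parameter settings and return one optimal set, which I would refit on all training data and evaluate once on the held-out test set; this is what I mean by "one optimal set of hyper-parameters" in option 2.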
| How to combine Bayesian Optimization and Cross Validation | CC BY-SA 4.0 | null | 2023-06-02T14:09:25.950 | 2023-06-02T14:09:25.950 | null | null | 307304 | [
"cross-validation",
"bayesian-optimization"
] |
617675 | 1 | null | null | 0 | 11 | I was experimenting with [tagtime](https://github.com/tagtime/TagTime), which randomly asks the user what they're doing at a known mean rate $\lambda$. Let's say that every time I am sampled, I give a yes/no answer. If I answer yes $k$ times within some period, then I supposedly spent $k\lambda$ in the "yes" state, but clearly this is the mean of the probability distribution of what I was actually doing, since I could have actually spent anywhere from an infinitesimal to an infinite amount of time in "yes." What is the probability distribution of how much time I actually spent in "yes?"
I know a few of the distributions related to this, e.g. that the time between samples follows an exponential distribution with parameter $\lambda$; but not enough to answer my own question.
| Probability distribution of actual time spent if randomly sampled at a known mean rate | CC BY-SA 4.0 | null | 2023-06-02T14:14:59.580 | 2023-06-02T14:16:08.167 | 2023-06-02T14:16:08.167 | 389428 | 389428 | [
"poisson-distribution",
"gamma-distribution",
"exponential-distribution"
] |
617676 | 2 | null | 507357 | 0 | null | In general, even highly correlated features can carry independent information. Consider the case where we have a target variable that simply represents the equality of two variables $X_1$ and $X_2$. No matter how highly correlated those variables are, it is impossible to predict the target at a rate better than random by using only one of the variables.
The trouble is, it's hard to know how much the "unique" signal in each variable is actually contributing to the target value. If the target variable is determined only by $X_1$, you'd be justified in dropping all other variables even if they were uncorrelated, but as the counterexample shows, sometimes you can't even drop the highly correlated features and expect to maintain predictive performance. I don't think this is anything unique to binary classification problems; it should extend to multi-class and regression problems as well. Similarity in the input feature space doesn't preclude the possibility that the "independent parts" of highly (but imperfectly) correlated features are useful in predicting the target.
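A quick simulation sketch of the counterexample in R (the 95% agreement rate is arbitrary): knowing $X_1$ alone leaves you at the base rate, while the two features together determine the target exactly.

```r
set.seed(1)
n  <- 1e5
x1 <- rbinom(n, 1, 0.5)
x2 <- ifelse(rbinom(n, 1, 0.95) == 1, x1, 1 - x1)  # x2 agrees with x1 95% of the time
y  <- as.integer(x1 == x2)                         # target: equality of x1 and x2

cor(x1, x2)                          # the two features are highly correlated
mean(y[x1 == 1]); mean(y[x1 == 0])   # y is at the base rate regardless of x1 alone
tapply(y, list(x1, x2), mean)        # but (x1, x2) together determine y exactly
```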
| null | CC BY-SA 4.0 | null | 2023-06-02T14:16:18.423 | 2023-06-02T14:16:18.423 | null | null | 76825 | null |
617677 | 1 | null | null | 0 | 17 | Please check that my understanding of hypothesis testing, confidence intervals, and their relation to the prior on population mean $\mu$ is correct.
Let $X_i\sim N(\mu, \sigma^2)$ be IID samples for $i = 1, ..., n$, (or $X_i$ any iid random variables with mean, variance $\mu, \sigma^2$). Suppose we don't know $\mu$ but we do have $\sigma$, and we want to reason about $\mu$.
We know that $\bar{X} = \sum_i X_i/n$, the sample mean follows $N(\mu, \sigma^2/n)$ (or in the non-normal $X_i$ case, that for large $n$ it does approximately).
For hypothesis testing, we are using this fact to make a statement about $\bar{X}$ conditioned on $\mu$ and $\sigma$. Suppose a null hypothesis, say $\mu = 0$. Then we know $P(\bar{X}|\mu,\sigma)$ is $N(\mu, \sigma^2/n)$, so if $\bar{X}$ lies in the tails, we reject the assumption. The point is that the reasoning for hypothesis testing follows from the distribution of $\bar{X}|\mu, \sigma$.
For confidence intervals on $\mu$, the situation is flipped with Bayes' rule, and requires a prior $P(\mu|\sigma)$ to be specified up to a constant. To see this, first recall the process: e.g., we are 95% confident that $\mu\in [\bar{X} - \beta \sigma/\sqrt{n}, \bar{X} + \beta \sigma/\sqrt{n}]$ for $\beta = F^{-1}(1 - (1-.95)/2) \approx 1.96$, where $F$ is the standard normal CDF. So it seems to me that we are finding probabilities of $\mu$ conditioned on $\bar{X}, \sigma$, a flip of the hypothesis-testing framework via Bayes' rule.
To write this rigorously, $P(\mu | \bar{X}, \sigma) \propto P(\bar{X}|\mu, \sigma) P(\mu|\sigma)$. Since $P(\bar{X}|\mu, \sigma)$ is $ N(\mu, \sigma^2/n) $, the confidence interval formulation requires that prior $P(\mu |\sigma)$ is 1 (an improper prior).
Is this a correct formulation of the math underpinning confidence intervals of the population mean under normality assumptions?
---
Related question: [Trouble relating the Central Limit Theorem to confidence intervals](https://stats.stackexchange.com/questions/371067/trouble-relating-the-central-limit-theorem-to-confidence-intervals)
The answer in the related question shows a formulation of the CI that is based only on the normal distribution of $\bar{X}$, without using a prior. This is worth understanding, but it doesn't answer the question above.
| A correct understanding of z-score confidence intervals with improper, informative priors | CC BY-SA 4.0 | null | 2023-06-02T14:21:14.807 | 2023-06-02T14:50:26.443 | 2023-06-02T14:50:26.443 | 92660 | 92660 | [
"hypothesis-testing",
"bayesian",
"confidence-interval",
"mean",
"prior"
] |
617678 | 2 | null | 515749 | 0 | null | this is old but I thought I'd leave the information here in case someone else bumps into it while searching. You could use the modified BG/NBD, which does account for the possibility that customers can drop out (never purchase again) after the first purchase
| null | CC BY-SA 4.0 | null | 2023-06-02T14:39:08.210 | 2023-06-02T14:39:08.210 | null | null | 389430 | null |
617679 | 1 | null | null | 0 | 8 | In ([https://becarioprecario.bitbucket.io/inla-gitbook/ch-smoothing.html#sec:smoothterms](https://becarioprecario.bitbucket.io/inla-gitbook/ch-smoothing.html#sec:smoothterms)), they show an example of a Random Walk 2 (RW2) prior being used on the LIDAR dataset. For the model set-up, we have that each log-ratio value, $y$, is observed at position $x$. The goal of the regression is to estimate $f(x)$ where $y\sim N(f(x),\sigma^2)$. They place a RW2 prior on $f(x)$, which for the case of regularly spaced $x$ is:
$$
f(x_{i+1})-2f(x_i)+f(x_{i-1})\sim N(0,\tau^2)
$$
My question concerns the hyperparameter $\tau^2$. If $f$ is $f(x)=x^2$, then would $\tau^2$ be estimable? In the case of $x^2$, if I evaluate the finite second derivatives with fixed spacing, then I should find that the value is constant across the grid. I do not understand the role of $\tau^2$ in this case because the second derivative is constant at all evaluation points.
I set up a simulation according to the description above and attempted to estimate a RW2 model on regularly spaced data. In the result from the Stan fit, $\sigma=2.9$, and $\tau=0.0025$. The $\sigma$ parameter is correctly estimated. The model does not appear to have converged though, and I suspect either I did not write the model correctly, or it needs to be reformulated.
```
data {
  int<lower=0> N;
  vector[N] y;
}
parameters {
  vector[N] mu;         // latent function values f(x_i)
  real<lower=0> sigma;  // observation noise standard deviation
  real<lower=0> tau;    // standard deviation of the RW2 increments
}
model {
  // second differences of mu, i.e. the RW2 increments
  vector[N-2] s = mu[1:(N-2)] - 2*mu[2:(N-1)] + mu[3:N];
  target += normal_lpdf(y | mu, sigma);      // likelihood
  target += normal_lpdf(s | 0, tau);         // RW2 prior on the second differences
  target += normal_lpdf(log(sigma) | 1, 3);  // prior placed on log(sigma)
  target += normal_lpdf(log(tau) | 0, 3);    // prior placed on log(tau)
}
```
```
set.seed(123)
xlim=3;delta=0.01
x=seq(-xlim,xlim,delta);sdParm=3#sqrt(1/6)
y=x^2+rnorm(length(x),sd = sdParm)
stanModel=cmdstanr::cmdstan_model(stan_file = 'RW2Demo.stan',exe_file = 'RW2.exe')
stanData=list(y=y,N=length(y),delta=delta)
fit=stanModel$sample(data=stanData,parallel_chains = 4)
fittedTab=fit$summary()
par(mfrow=c(1,2),mai=c(0.75,0.25,0.25,0.25))
plot(x,y,ylab='Observed Values',ylim=c(-5,10))
lines(x,x^2,col='red')
plot(x,fittedTab[2:(2+length(y)-1),]$mean,ylab='Fitted Values',ylim=c(-5,10))
lines(x,x^2,col='red')
```
| Estimation with Random Walk 2 Priors | CC BY-SA 4.0 | null | 2023-06-02T14:41:34.673 | 2023-06-02T14:41:34.673 | null | null | 311086 | [
"regression",
"markov-chain-montecarlo",
"random-walk",
"stan",
"inla"
] |
617680 | 1 | 617726 | null | 1 | 31 | I have some time series data that shows a building indoor temperature subject to outside temperature. When the heating/cooling is turned off, the building slowly cools down (or heats up) towards the outside temperature. Now i was able to filter these periods in the time series. The data looks like the one in the picture. What would it be the best approach to fit a model to this data? collect different time series of the exponential decay (cooling down) and fit as many models into these time series, proceeding then to somehow aggregate the results into a model that can predict based on an initial temperature how long it'd take to get to x degrees for example or to keep it as one and fit a model? please advise. Thanks in advance!
Edit: I want to find out how long it takes for the building to cool down in this case, provided the AC is providing no heating. The temperature should converge to the outside temperature and it does so in an exponential way. different outside temperatures will dictate the rate of the exponential curve that best models the data and I'm wondering how to best tackle this in practice.
[](https://i.stack.imgur.com/4aMVP.png)
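For reference, this is roughly the kind of single-episode fit I have in mind, sketched in R with Newton's law of cooling. All names and numbers here are hypothetical: `episode` is a data frame with columns `hours` (time since the AC was switched off) and `T_in` (indoor temperature), and `T_out` is the roughly constant outside temperature during that episode.

```r
T_out <- 10                                   # assumed known for this episode
fit <- nls(T_in ~ T_out + (T0 - T_out) * exp(-k * hours),
           data  = episode,
           start = list(T0 = max(episode$T_in), k = 0.1))
coef(fit)                                     # initial temperature T0 and decay rate k

# Predicted time (hours) for the indoor temperature to reach a target x
T0 <- coef(fit)[["T0"]]; k <- coef(fit)[["k"]]
x  <- 18                                      # hypothetical target temperature
log((T0 - T_out) / (x - T_out)) / k
```

I could repeat this per episode and then model how the rate k varies with the outside temperature, or try to fit everything jointly; that choice is essentially my question.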
| Temperature time series data — best analysis approach | CC BY-SA 4.0 | null | 2023-06-02T14:42:06.813 | 2023-06-02T21:20:32.693 | 2023-06-02T18:20:59.667 | 298094 | 298094 | [
"time-series"
] |
617681 | 1 | null | null | 0 | 38 | currently I am trying to implement a prototype for the following problem.
I have data for machines, which sends me how long they have operated in seconds. Further, they have one sensor, which might have a value. So it would look like this
```
Duration Sensor Value
37 - -
31 se1 A
12 - -
29 se1 A
140 se1 A,B,C
```
Normally, I would expect the sensor to have small variation, but the longer the duration is, the more variation would be expected. In my toy example, I would expect my sensor se1 to have 1 value for an average duration, but it would be OK to have 3 distinct values if the duration is significantly longer.
Now, I would like to model it as a Bayesian problem
```
X := number of distinct values for sensor se1
Y := duration length in seconds
```
`P(X = x | Y = y)` would be my inference such as "`how probable is it to get 3 distinct values for a duration of 140 seconds?`"
My approach is
- from the full dataset estimate P(X) e.g. via scipy.fit()
- from the full dataset estimate P(Y) e.g. via scipy.fit()
- now filter the dataset, such that only observations of se1 are in the filtered set. Consider it as evidence and estimate P(Y | X) from it.
- use Bayes Theorem to calculate P(X | Y)
I am not so sure about step 3).
Do I have to filter for "se1 present" or do I have to filter for "se1 has 1 distinct value" then fit, "se1 has 2 distinct values" and fit again, etc.?
| Bayesian Inference: Conceptual question to get evidence | CC BY-SA 4.0 | null | 2023-06-02T14:46:44.583 | 2023-06-02T16:39:01.897 | null | null | 112108 | [
"probability",
"distributions",
"bayesian",
"scipy"
] |
617682 | 1 | null | null | 1 | 21 | I have data that is reasonably assumed to be iid samples from some distribution. Our goal is to put a confidence interval on the population mean and have something similar for the population variance. Notationally, we have IID $X_i, i = 1, ..., n$ with mean $\mu$, variance $\sigma^2$ unknown. Sample size $n$ varies from 200 to 20,000.
Plotting my data, it is trimodal, so definitely does not seem to be coming from a normal distribution.
Computing confidence intervals on the sample mean $\bar{X_n}$ is no problem. I'm just not sure how much to trust them.
Below are my planned diagnostics. Can you tell me if they make sense?
My reasoning is to check whether many samples of $\bar{X}_{n-1}$ do indeed follow a normal distribution, to gain confidence that our $n$ is large enough for the central limit theorem approximation to be valid. I can compute the sample mean $\bar{X}_{n-1}$ when I hold out one data point; that gives $n$ different samples of the sample mean $\bar{X}_{n-1}$. I can plot those and see if they follow a normal distribution, or make a Q-Q plot against a normal with standard deviation equal to the computed standard error ($S_n/\sqrt{n}$), to see if they are close to normal. If so, it is evidence that $\bar{X}_n$ is approximately normal and the confidence interval computed with variance $S_n^2/n$ is valid. Does this sound like a sound method?
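In R, with the data in a vector `x`, the check I have in mind would look roughly like this:

```r
n <- length(x)
loo_means <- sapply(1:n, function(i) mean(x[-i]))   # sample means, each with one point held out

hist(loo_means, breaks = 30)                        # do they look roughly normal?
qqnorm(loo_means); qqline(loo_means)                # Q-Q plot as a shape check

sd(loo_means)                                       # spread of the hold-one-out means
sd(x) / sqrt(n)                                     # the standard error I plan to use, S_n / sqrt(n)
```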
Are there other diagnostics to gain confidence in the confidence intervals that are perhaps better?
| Techniques/diagnostics for gaining confidence in normality assumptions and resulting confidence intervals | CC BY-SA 4.0 | null | 2023-06-02T14:48:41.363 | 2023-06-03T11:36:30.080 | null | null | 92660 | [
"confidence-interval",
"mean",
"central-limit-theorem",
"diagnostic"
] |
617683 | 1 | null | null | 0 | 38 | [](https://i.stack.imgur.com/3zNe0.png)I conducted a content analysis on social media posts regarding CEO communication about sociopolitical topics. I have two levels of analysis entities: 1) the social media account of a CEO, 2) the social media posts per account.
One question I wanted to examine (on the level of 1) social media accounts) is whether the affiliation of an account to a specific industry does correlate with the number of sociopolitical posts - in other words, do the CEO social media accounts of specific industries have more sociopolitical posts than others?
I conducted a one-way ANOVA with "industry of account" as my independent variable, which is a nominal variable (codes "1" to "7§), and the "number of sociopolitical posts" as my explained variable, which is continuous (no codes, but frequency numbers, ranges from 1 to 76).
That's the code I used:
```
data$industry <- as.factor(data$industry)
anova <- aov(post_sp ~ industry, data = data)
etaSquared(anova, anova = TRUE)
pairwise.t.test(data$post_sp, data$industry)
```
Everything worked well except for the pairwise post-hoc test, for which I didn't receive any output. That's when I asked myself whether it's even possible to conduct an ANOVA with the variable "number of sociopolitical posts", or whether I have to recode it somehow.
| Is it possible to conduct ANOVA with a frequency variable? | CC BY-SA 4.0 | null | 2023-06-02T13:59:26.993 | 2023-06-03T10:28:28.940 | 2023-06-03T10:28:28.940 | 389497 | 389497 | [
"r",
"anova"
] |
617684 | 1 | null | null | 3 | 36 | I have data that is reasonably assumed to be iid samples from some distribution. Our goal is to put a confidence interval on the population variance Notationally, we have IID $X_i, i = 1, ..., n$ with mean $\mu$, variance $\sigma^2$ unknown. Sample size $n$ varies from 200 to 20,000. Let $S_n^2 = \frac{\sum_i (X_i -\bar{X}_n )^2}{n-1}$, the sample variance.
Plotting my data, it is trimodal, so definitely does not seem to be coming from a normal distribution. Computing confidence intervals on the sample mean $\bar{X_n}$ is no problem, and I have planned diagnostics to make sure the sample mean distribution is normal with variance near $S_n^2/n$.
How can I create a confidence interval around $\sigma$ based on $S_n$ and what are the diagnostics to gain confidence that it is correct?
---
Related questions:
- What is the name of the distribution of unbiased sample variance for a sample from Gaussian distribution?
- Chi-squared confidence interval for variance
Both 1., 2. are concerned with the case where $X_i$ are IID normally distributed, but I don't have that almost certainly.
- How to prove $(\hat{X}-\mu)/(\hat{S}/\sqrt{n})$ is student t with $n-1$ degrees of freedom if $X_i$ are iid $N(\mu, \sigma)$?
The answer to question 3 uses an interesting technique in its proof: writing $S_n^2 = \sum_{i = 1}^{n-1}Y_i^2$ for iid positive $Y_i$. It seems like applying the CLT to this fact will provide a solution.
---
| How to get confidence interval for the sample variance? | CC BY-SA 4.0 | null | 2023-06-02T15:01:48.700 | 2023-06-02T21:58:23.097 | null | null | 92660 | [
"confidence-interval",
"variance",
"standard-error",
"diagnostic"
] |
617685 | 2 | null | 617668 | 0 | null | One big problem comes to mind: even without censored observations, there is no single "death rate" (e.g., deaths per person at risk per year) unless there's an exponential survival curve. The hazard function is the continuous death rate over time, and it's far from constant over time for a lognormal distribution; see the [NIST page](https://www.itl.nist.gov/div898/handbook/eda/section3/eda3669.htm) for example plots of lognormal hazard functions.
Without an exponential survival curve, any "death rate" based on dividing a number of deaths by a number at risk will depend on the particular time window involved. What exactly do you mean by a "death rate" in this context? How would you apply it in practice?
In your situation this problem is exacerbated by omitting right-censored observations from the calculations. That necessarily introduces bias into the estimate, in a way that depends heavily on the censoring pattern in the underlying data. It's hard to think of a scenario in which omitting censored event times leads to anything other than trouble.
Survival analysis done properly allows for other than exponential survival curves while handling censored event times. It allows for predictions of survival at specific times of interest, so that if you are particularly concerned about, say, early events you can focus attention on them.
In terms of the sampling variability you want to capture (variability in survival-curve estimates arising from random sampling of the population) the covariance matrix of the coefficient estimates should contain the information you need.
If the data set is large enough and the form of the model is adequate, then the asymptotic multivariate normality of the estimates can be used directly without resampling. I showed that at the end of my [answer to the question you linked](https://stats.stackexchange.com/a/617154/28500), where resampling from the survival distribution gave essentially the same coefficient covariance as the original model's coefficient covariance matrix.
If the form of the model isn't adequate, then modeling on resampled cases might give an estimate of a type of variability: the variability in coefficient estimates in a model that doesn't properly fit the data. What's the point? You are better served by developing an adequate model.
If the data set isn't large enough, you have to consider whether to trust the model at all. Your statement about "paucity of data" is thus troubling. It might be more useful to present the actual scenario that you are interested in, as I fear that the "paucity of data" will limit what can be accomplished.
| null | CC BY-SA 4.0 | null | 2023-06-02T15:34:50.610 | 2023-06-02T15:34:50.610 | null | null | 28500 | null |
617686 | 2 | null | 617684 | 1 | null | Let's take a look at a bootstrap based approach, and compare the results to the CLT based confidence interval. First, I'll define a population distribution which is trimodal, skewed, and heavy tailed (compared to Normal).
```
rmydist <- function(n){
i <- sample(3, n, TRUE)
x1 <- rnorm(n)
x2 <- rgamma(n, 1.2, 0.5) +2
x3 <- rbeta(n, 1.8, 0.5)*3 - 4
x <- (i==1)*x1 + (i==2)*x2 + (i==3)*x3
return(x)
}
# Plot histogram with huge n
x <- rmydist(1e8)
hist(x, breaks=30)
```
[](https://i.stack.imgur.com/Mkqfa.png)
The "true" variance, based on this really huge sample from the population, is $\sigma^2 \approx 8.612$ (this matches the exact variance, which can be computed using the [Law of Total Variance](https://en.wikipedia.org/wiki/Law_of_total_variance)).
Now we can compute 95% confidence intervals using (i) the bootstrap, (ii) the [accelerated bootstrap](https://stats.stackexchange.com/questions/437477/calculate-accelerated-bootstrap-interval-in-r) and (iii) chi-square approximation for $n=200$.
```
# Generate data
set.seed(12345)
n <- 200
x <- rmydist(n)
# Confidence interval with (percentile) bootstrap
B <- 10000
boot <- rep(NA, B)
for(i in 1:B){
xnew <- sample(x, n, TRUE)
boot[i] <- var(xnew)
}
CI_1 <- quantile(boot, probs=c(0.025, 0.975))
# Confidence interval with accelerated bootstrap
#devtools::install_github("knrumsey/quack")
library(quack)
n <- 200
x <- rmydist(n)
a <- est_accel(x, var)
CI_2 <- boot_accel(x, var, alpha=0.05, a=a)
# Confidence interval based on CLT
chi <- qchisq(c(0.025, 0.975), n-1)
CI_3 <- (n-1)*var(x)/rev(chi)
```
The confidence interval for these three cases comes out to be
|Method |Lower bound |Upper bound |
|------|-----------|-----------|
|Bootstrap |$7.23$ |$11.75$ |
|Accelerated bootstrap |$6.47$ |$9.69$ |
|CLT |$6.46$ |$9.58$ |
Note that all 3 methods capture the "true value" of $8.612$ here. But one dataset isn't very interesting, so let's perform a simulation study.
---
## Simulation study
We can repeat the R analysis conducted above one thousand times for each of the three methods. We are interested in (i) empirical coverage (the number of times each method captures the true value) and (ii) the width of the confidence interval.
|Method |Empirical coverage |Interval width (average) |
|------|------------------|------------------------|
|Bootstrap |92.3% |4.3 |
|Accelerated bootstrap |92.8% |4.5 |
|CLT |84.3% |3.4 |
It is interesting to note that all of these methods undercover compared to nominal (95%), but the CLT based method is especially over-confident, yielding precise intervals which fail to capture the true value more often than they should.
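For completeness, here is a sketch of how such a simulation loop could be organized (not necessarily the exact code behind the table above; the CLT interval is shown, and the two bootstrap intervals are handled analogously):

```r
n_sims   <- 1000
true_var <- 8.612
results  <- matrix(NA, n_sims, 2,
                   dimnames = list(NULL, c("covered", "width")))

for (s in 1:n_sims) {
  x <- rmydist(200)
  # CLT / chi-square interval (swap in the bootstrap versions analogously)
  chi <- qchisq(c(0.025, 0.975), 199)
  ci  <- 199 * var(x) / rev(chi)
  results[s, ] <- c(ci[1] <= true_var & true_var <= ci[2], diff(ci))
}

colMeans(results)   # empirical coverage and average interval width
```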
| null | CC BY-SA 4.0 | null | 2023-06-02T15:55:58.050 | 2023-06-02T21:58:23.097 | 2023-06-02T21:58:23.097 | 126931 | 126931 | null |
617687 | 1 | null | null | 0 | 21 | I'm trying to explain a time-varying country-year count variable of an event (Y) with both time-varying and time-variant variables (X); I have data of a around 100 countries for 20 years (2000-2020). I'm running a negative binomial regression with random country intercepts; corresponding to the R formula `Y ~ X1 + X2 + Xn + (1 | country)`.
My understanding of the coefficients of X is that they contain both within and between country variation.
I'm being told that interpreting coefficients is not illustrative enough.
I thought an illustrative way of showing the impact of the explanatory variables would be to hold each individually at its value at the beginning of the observation period, predict Y from this modified data, and attribute the change in Y to the change in X (e.g. if X1 had stayed at its 2000 level, 50 fewer Y would have taken place in the countries included in the dataset, while if X2 had stayed at 2000 levels, 30 more Y would have taken place).
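Concretely, the procedure I have in mind looks roughly like this R sketch, where `fit` is the fitted mixed negative binomial model and `dat` the country-year data (both hypothetical names), and I assume the model's `predict()` method accepts `newdata` and `type = "response"`:

```r
# Observed (fitted) expected counts
pred_obs <- predict(fit, newdata = dat, type = "response")

# Counterfactual: hold X1 at each country's year-2000 value
dat_cf <- dat
x1_2000 <- with(dat, ave(ifelse(year == 2000, X1, NA), country,
                         FUN = function(z) z[!is.na(z)][1]))  # NA if a country has no 2000 row
dat_cf$X1 <- x1_2000
pred_cf <- predict(fit, newdata = dat_cf, type = "response")

# Attribute the difference in expected counts to the change in X1
sum(pred_obs) - sum(pred_cf)
```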
Is this procedure permissible and is this interpretation correct? In particular, since my coefficients contain both within- and between-country variation, I'm wondering if predicting Y from X1 values held constant at their 2000 levels will impose between-country variation on a within-country process.
| counterfactual prediction in country-year data | CC BY-SA 4.0 | null | 2023-06-02T16:00:41.830 | 2023-06-02T16:00:41.830 | null | null | 227723 | [
"predictive-models",
"count-data",
"counterfactuals"
] |
617688 | 1 | null | null | 0 | 12 | In a package to write linear regression models I have found the following description:
`dependent variable ~ exogenous variables + (endogenous variables ~ instrumental variables) + fe(fixedeffect variable)`
Can you explain the roles of the various types of variables, in particular the difference between endogenous and exogenous variables, and what the instrumental and fixed-effects ones are?
Are there cases where all of them are used together?
| Role of different types of variables in a linear model | CC BY-SA 4.0 | null | 2023-06-02T16:03:25.383 | 2023-06-02T18:00:57.037 | 2023-06-02T18:00:57.037 | 121522 | 263905 | [
"mixed-model",
"fixed-effects-model",
"instrumental-variables",
"endogeneity"
] |
617689 | 1 | null | null | 0 | 11 | In the Rasch model, the reliability of separation can be calculated as 1 - (MSE / SD^2), where SD represents the standard deviation of person location measures on a logit scale, and MSE refers to the mean squared errors of item location measures.
What confused me is how to compute MSE and what it signifies. Is there a specific example or code?
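For what it's worth, my current reading of the formula in R is something like the sketch below, assuming `measures` is the vector of location measures on the logit scale and `se` the vector of their standard errors from the Rasch fit; whether these should be the person or the item quantities is exactly part of my confusion.

```r
mse <- mean(se^2)       # my reading of "MSE": average of the squared standard errors
sd2 <- var(measures)    # SD^2: observed variance of the location measures
rel <- 1 - mse / sd2    # separation reliability as in the formula above
rel
```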
| What does MSE mean in the calculation of separation reliability for rasch model? | CC BY-SA 4.0 | null | 2023-06-02T16:08:22.887 | 2023-06-02T16:47:46.077 | 2023-06-02T16:47:46.077 | 341034 | 341034 | [
"reliability",
"item-response-theory",
"rasch"
] |
617690 | 1 | null | null | 0 | 16 | I'm trying to interpret a coefficient of a glm model with the gamma family and the identity link function. The outcome is continuous, positive and right-skewed. Transformation did not yield a normal distribution.
My model looks like this:
`model_glm_gamma = glm(outcome ~ var1 + var2 + var3, data = data, family = Gamma(link = "identity"))`
Coefficients:

| |Estimate |Std. Error |t value |Pr(>t) |
|---------|---------|-----------|--------|-------|
|Intercept |4.91 |0.58 |8.4 |<0.0001 |
|Var1 |-0.81 |0.21 |-3.7 |0.0002 |
|Var2 |-2.16 |0.74 |-2.9 |0.0040 |
|Var3 |-0.33 |0.57 |-0.5 |0.5564 |
How do I interpret the coefficient of -2.16 for var2, given that var2 is categorical with two levels, 0 and 1?
Does a coefficient of -2.16 for var2 mean that the predicted value of the response variable is expected to be 2.16 units lower for level 1 compared to level 0, while holding all other variables constant?
| How to interpret coefficients of a GLM (Gamma family with identity link) | CC BY-SA 4.0 | null | 2023-06-02T16:12:40.580 | 2023-06-02T21:25:22.493 | 2023-06-02T21:25:22.493 | 389433 | 389433 | [
"multiple-regression",
"gamma-distribution",
"link-function"
] |
617691 | 1 | null | null | 0 | 11 | My study is a retrospective one group pre-test post-test on multiple variables. Not every patient in the study had scores for all of the variables, so as long as a patient had both a pretest and a paired posttest, the pair was included. That left different paired totals for each variable. I'd like to conduct a paired t-test for each variable, but I am confused as how to determine normalcy (I'm assuming I should). If I'm correct, I should be using Shapiro–Wilk - all of the variables have totals in the low 20s, with one of them under 20. My question, is what value should I be conducting the Shapiro Wilk (if that is the correct test) on, the pre-test values, or the differences between the pre-test and post test values?
| How to determine normalcy for a one group pre-test post-test? | CC BY-SA 4.0 | null | 2023-06-02T16:26:52.843 | 2023-06-02T17:00:19.410 | 2023-06-02T17:00:19.410 | 362671 | 389436 | [
"normal-distribution",
"shapiro-wilk-test"
] |
617692 | 2 | null | 321996 | 0 | null | I am not sure but what if you consider as a model the (any) correlation coefficient r between X1 and X2 as a (continuous ?) function of realizations z1 of Z1. For example, let r(z1) = rexp[- az1], for an estimated parameter a. To verify such a "hypothesis" estimate first r for a given value z1. Then do it for other value z1 and so on...
If the data were not consistent with the model exp[ -az1] try other models such as, for example, r = r(1 - exp[- az1] ) or r = 2/pi(arctan(bz1)) or other. If, however the correlation coefficients turn out to be (statistically) constant over z1 then the independence probably will take place. If you got variance as a negative covariance then what would happen if you consider as the variance the absolute value of the covariance ?? I am not sure of above but was trying to help you. Jerzy F.
| null | CC BY-SA 4.0 | null | 2023-06-02T16:29:09.593 | 2023-06-02T16:29:09.593 | null | null | 389440 | null |
617693 | 1 | null | null | 0 | 17 | I have an imbalanced treatment group and a control group. I want to use Matchit in R to calculate propensity scores and get balanced groups.
However, there are several parameters I have to set in the Matchit package, for example, method, ratio, distance, and caliper. Each of these parameters has several options, for example, I can set method="nearest", "optimal", "genetic", distance="logit", "rpart", "nnet".
How can I choose from these options to get the most balanced groups (maybe indicated by the smallest Std. Mean Diff.)? There would be hundreds of combinations of these parameters.
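For illustration, trying a handful of combinations and comparing the resulting balance tables would look roughly like the sketch below, where `treat`, the covariates and `df` are hypothetical names; with hundreds of combinations this quickly becomes brute force, hence my question.

```r
library(MatchIt)

methods   <- c("nearest", "optimal", "genetic")  # some methods may need extra packages (optmatch, rgenoud)
distances <- c("logit", "rpart", "nnet")

fits <- list()
for (m in methods) {
  for (d in distances) {
    fits[[paste(m, d)]] <- matchit(treat ~ x1 + x2 + x3,
                                   data = df, method = m, distance = d)
    # caliper and ratio could be added as further loop dimensions
  }
}

lapply(fits, summary)   # compare the "Std. Mean Diff." columns across specifications
```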
| How can I get the most balanced groups using matchit in R? | CC BY-SA 4.0 | null | 2023-06-02T16:35:19.587 | 2023-06-02T16:35:19.587 | null | null | 366864 | [
"r",
"propensity-scores",
"matching"
] |
617694 | 1 | null | null | 0 | 13 | [Null Hypothesis Significance Test (NHST)](https://en.wikipedia.org/wiki/Statistical_hypothesis_testing) is a class of statistical tests that comes from blending procedures from Fisher’s Significance Test and Neyman & Pearson’s Hypothesis Test (see, e.g., [Lew (2020)](https://link.springer.com/chapter/10.1007/164_2019_286#Sec4)). Many have derided its mere existence (see [Cohen (1994)](https://www.sjsu.edu/faculty/gerstman/misc/Cohen1994.pdf)), but it is nonetheless commonly applied across many fields.
Fixed-horizon/fixed-sample tests are statistical tests that require the person to determine and commit to the number of samples before starting the experiment.
The question:
Is an NHST, by definition, a fixed-horizon/fixed-sample test?
Alternatively, are the class boundaries (especially for NHST) well-defined enough for this question to be answerable, or perhaps the underlying philosophical considerations are orthogonal and should not be directly compared?
I am looking for references that address this either way (or outright state it is inconclusive).
---
My current understanding:
There is a philosophical conflict on NHST's origins regarding sample size determination.
As a decision procedure, Neyman & Pearson’s hypothesis test requires one to decide on the sample size before starting the experiment. Fisher's significance test, however, does not require pre-determining the sample size and potentially allows one to wait for more data to arrive (as no decision is involved).
It is not 100% clear which of the conflicting stances the NHST hybrid has inherited (or, more precisely, whether the Neyman-Pearson approach has completely wiped out the Fisherian approach on the sample size front).
Views in my field (digital experimentation) seem to agree with this implication, but it is not definitive.
Existing works on alternatives to NHSTs in the context of continuous monitoring (a.k.a. adaptive/optional stopping or optionally increasing the sample size) have made comments of various precision on this matter:
>
NHST is valid for fixed horizon test. But it is known to underestimate Type-I error when continuous monitoring is used. --- Deng, Lu & Chen (2016)
>
However, this practice of unplanned multiple testing [(adaptively increasing the sample size)] is not allowed in the classical NHST paradigm, as it increases Type I error rates. --- Schönbrodt et al. (2017), who also cited Armitage, McPherson, & Rowe (1969)
>
The validity of NHST requires that the sample size is fixed in advance, which is often violated in practice. --- Yu, Lu & Song (2020)
>
The main issue is that an experiment following NHST requires a fixed sample size and therefore a fixed time window, which does not allow repeated significant testing, or “continuous monitoring”. --- Ju et al. (2019)
[Some also used the term Classical/Traditional NHST](https://www.linkedin.com/posts/ronnyk_classical-null-hypothesis-significance-testing-activity-6978148981809840128-Ecs7), perhaps in an attempt to perform more-detailed differentiation. [Johari et al. (2017)](http://library.usc.edu.ph/ACM/KKD%202017/pdfs/p1517.pdf) refer to common test procedures in Web A/B testing as "standard frequentist parameter testing," leaving one wondering whether it is equivalent to NHST at all.
Then there are sequential tests.
[Sequential tests](https://en.wikipedia.org/wiki/Sequential_analysis) are tests with sample sizes not determined in advance. Many sequential tests bear the hallmark of an NHST --- existence of competing hypotheses, a set of decision rules around the test statistic, p-values, and the lack of requirement to specify a prior.
However, it is not universally clear whether (frequentist) sequential tests count as part of NHST. If they are, then we have a clear counterexample.
[Deng, Lu & Chen (2016)](https://arxiv.org/pdf/1602.05549.pdf) regarded sequential testing as a "different theory to allow continuous monitoring in NHST framework", while [Schönbrodt et al. (2017)](https://osf.io/w3s3s/) call group sequential tests an "extension of the NHST paradigm". The irony that these quotes may contradict the previous quotes from the same authors is not lost on me.
There are other works on sequential analysis (usually the Bayesian ones) which contrast themselves against NHST, though I see that more as a frequentist vs Bayesian contrast rather than the one this question seeks.
| Is it a must for a Null Hypothesis Significance Test (NHST) to be fixed-horizon/fixed-sample? | CC BY-SA 4.0 | null | 2023-06-02T16:36:32.493 | 2023-06-02T16:36:32.493 | null | null | 239410 | [
"hypothesis-testing",
"statistical-significance",
"references"
] |
617695 | 2 | null | 617681 | 0 | null | It sounds like your end-goal is to come up with a pmf $P(X|Y)$, i.e. for any signal duration Y, what are the probabilities that you get 0 values, 1 value, 2 values, etc.
The only reason to invoke Bayes' theorem for this analysis is if for some reason you know the functional form of $P(Y|X)$. I.e., given the number of distinct values for sensor 1, what is the conditional pdf of signal duration. It doesn't sound to me like you know this.
To that end, I would suggest you just try to model $P(X|Y)$ directly. Seeing as X is an integer which is likely to take a relatively small value, you could (just throwing ideas out) try to model it as a conditional Poisson distribution, that is to say that $X_{i} \sim Poisson(\lambda(Y_{i}))$ and then you'll need to come up with some sensible model for $\lambda$ as a function of Y. You might think that you'd on average expect the number of distinct signals to be proportional to the length of the signal in which case you'd model it as a simple linear. Or you might think it would be some sort of diminishing return and you could model it as a power law or a logarithm.
To work through the simple case (linear without intercept), your log-likelihood would look like
$\sum_{i=1}^{N}\ln \left[\frac{(a\cdot y_{i})^{x_{i}}e^{-a\cdot y_{i}}}{x_{i}!} \right]=\sum_{i=1}^{N}\left(x_{i}\ln a - ay_{i}\right)+ \text{const}$
and you can now use gradient descent or some other optimisation algorithm to find the maximum of this wrt a
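For instance, a minimal sketch in R (with `x` the counts of distinct values and `y` the durations); in this simple linear-rate case the maximiser also has the closed form $\hat a = \sum_i x_i / \sum_i y_i$:

```r
# full Poisson log-likelihood for lambda_i = a * y_i (lgamma term is constant w.r.t. a)
loglik <- function(a, x, y) sum(x * log(a * y) - a * y - lgamma(x + 1))

# numerical maximisation over a (search interval chosen arbitrarily)
a_hat <- optimize(loglik, interval = c(1e-6, 10), x = x, y = y, maximum = TRUE)$maximum

# closed-form solution for comparison
sum(x) / sum(y)
```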
The key take-home here is that using Bayes' theorem only helps when you have some information about the inverse problem. For example, if you toss a coin 10 times and it comes up heads 6 times and you want to know the inherent probability of that coin coming up heads, you have more information about the inverse problem. If you knew the inherent probability of a coin coming up heads, then you would know (via the binomial distribution) the probability of seeing 6 heads out of 10 tries. So inverting the problem is helpful.
In your case though, you don't have any inherent understanding of how the number of distinct values depends on the signal duration or how the signal duration depends on the number of distinct values (as far as I can tell anyway), so framing one in terms of the other won't help you/is circular. To make progress, unless you have tonnes of data, or Y doesn't have many distinct values relative to the amount of data you have, you won't be able to model $P(X|Y)$ non-parametrically, so you'll have to make some parametric assumptions about the functional form of the distribution of X|Y, and then you can estimate those parameters via maximum-likelihood (or even calculate the full Bayesian posterior on those parameters)
| null | CC BY-SA 4.0 | null | 2023-06-02T16:39:01.897 | 2023-06-02T16:39:01.897 | null | null | 103003 | null |
617697 | 1 | null | null | 1 | 38 | Many outstanding answers here detail the fundamentals of linear discriminant analysis. These include descriptions of [its use in dimensionality reduction](https://stats.stackexchange.com/questions/48786/algebra-of-lda-fisher-discrimination-power-of-a-variable-and-linear-discriminan/48859#48859), an explanation of [classification using Bayes' rule](https://stats.stackexchange.com/questions/31366/linear-discriminant-analysis-and-bayes-rule-classification), and a description of [within- and between-class scatter matrices](https://stats.stackexchange.com/questions/123490/what-is-the-correct-formula-for-between-class-scatter-matrix-in-lda).
Common methods for measuring importance of discriminants to classification as a whole (eigenvalue and correlation-based) are included in the [first answer](https://stats.stackexchange.com/questions/48786/algebra-of-lda-fisher-discrimination-power-of-a-variable-and-linear-discriminan/48859#48859). These seem to tackle the problem, "How important is discriminant d for separating clusters in general?" I am searching for something slightly different:
How important is discriminant d, given the value of an input point, x, that we would like to classify?
What I've tried so far: iteratively removing discriminants and calculating class probabilities at x using Bayes' rule. Stopping when statistical difference between iteration i and i+1 exceeds a threshold. I would prefer, however, a non-iterative solution that is more similar to the direct calculation of discriminant coefficients.
| Importance of linear discriminants to classification at a given point | CC BY-SA 4.0 | null | 2023-06-02T16:52:26.530 | 2023-06-02T17:54:39.390 | null | null | 389441 | [
"bayesian",
"classification",
"dimensionality-reduction",
"discriminant-analysis"
] |
617698 | 1 | null | null | 1 | 20 | We have single cell data from 12 control and 12 diseased individuals. The data has been integrated (remove batch effects) and clustered. We would like to know if there are any clusters that are enriched/depleted in disease versus control.
I read a paper that said they accomplished this using the fisher exact test. They also had single cell data from multiple donor samples. I do not think this is the best way of doing this. The fisher exact test assumes independence of observations in the contingency table. Because the cells are associated with 24 sample IDs, I do not believe they can be called independent.
I think a better test would be to calculate the cluster proportions of each sample, and perform the wilcoxon rank sum test for each cluster. This has been done in multiple papers, and it accounts for differences in cluster composition across donors. Please help me understand if I am wrong.
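To make the alternative concrete, the per-sample approach I have in mind looks roughly like this in R, assuming a data frame `cells` with one row per cell and columns `sample`, `condition`, and `cluster` (all hypothetical names):

```r
# proportion of each sample's cells falling in each cluster
props <- prop.table(table(cells$sample, cells$cluster), margin = 1)
props <- as.data.frame.matrix(props)

# condition (control/disease) of each sample, in the same row order as props
cond <- cells$condition[match(rownames(props), cells$sample)]

# one Wilcoxon rank-sum test per cluster, comparing the 12 vs 12 sample-level proportions
pvals <- apply(props, 2, function(p) wilcox.test(p ~ cond)$p.value)
p.adjust(pvals, method = "BH")
```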
| Fisher exact test for enrichment of subpopulations in single cell data | CC BY-SA 4.0 | null | 2023-06-02T17:16:19.140 | 2023-06-02T17:51:57.977 | null | null | 362835 | [
"wilcoxon-mann-whitney-test",
"fishers-exact-test"
] |
617699 | 2 | null | 617637 | 7 | null | Let $X_i$ be independent variables with mean $\mu$ and variance $\sigma^2$. Then the mean and variance of a sum scale like:
$$\begin{array}{rcl}
\text{Mean}\left(\sum_{i=1}^n X_i\right) &=& n\mu\\
\text{Var}\left(\sum_{i=1}^n X_i\right) &=& n\sigma^2
\end{array}$$
So indeed the variance increases and scales like $\propto n$.
But the signal to noise ratio or coefficient of variation does not
$$S/N = \frac{\text{Mean}\left(\sum_{i=1}^n X_i\right)}{\sqrt{\text{Var}\left(\sum_{i=1}^n X_i\right)}} \propto \sqrt{n}$$
$$CV = \frac{\sqrt{\text{Var}\left(\sum_{i=1}^n X_i\right)}}{\text{Mean}\left(\sum_{i=1}^n X_i\right)} \propto \frac{1}{\sqrt{n}}$$
So you get more signal and less (relative) variance.
---
A related question is: [Binomial distribution for gender discrimination?](https://stats.stackexchange.com/questions/547315)
In that question we consider a binomial distribution with increasing sample size (basically a sum of Bernoulli distributed variables). As the mean increases, the standard deviation increases as well, but at a smaller rate.
[](https://i.stack.imgur.com/jSVf6.png)
---
>
I have been told to expect that variance sums, so the combination of multiple highly variable inputs should lead to a many times more variable output
This may occur in the context of error propagation. For example, consider an experiment where we measure the weight of a liter of milk: there is an error in measuring out the volume of one liter and, in addition, an error in measuring the weight. These two errors add up.
>
On the one hand, I have been told to expect that summing multiple noisy inputs should lead to noise reduction for the output
This is as explained above. The noise reduction is in a relative sense: when you sum values, the signal scales with $n$ while the noise scales only with $\sqrt{n}$.
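A small simulation sketch of this scaling (in R):

```r
set.seed(1)
n_values <- c(1, 10, 100, 1000)
sim <- sapply(n_values, function(n) {
  sums <- replicate(1e4, sum(rnorm(n, mean = 1, sd = 2)))  # sum of n noisy inputs
  c(mean = mean(sums), sd = sd(sums), cv = sd(sums) / mean(sums))
})
colnames(sim) <- n_values
round(sim, 2)
# the mean grows like n, the sd like sqrt(n), and the CV shrinks like 1/sqrt(n)
```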
| null | CC BY-SA 4.0 | null | 2023-06-02T17:19:14.997 | 2023-06-02T18:01:22.423 | 2023-06-02T18:01:22.423 | 164061 | 164061 | null |
617700 | 2 | null | 261008 | 0 | null | What you describe is IMHO a simple and effective way to determine what inputs your model is most sensitive to.
However, 'sensitive' is not necessarily the same as 'important'.
For example if your model is very prone to overfitting issues then such a metric could easily lead you in the wrong direction: Those 'highly sensitive inputs' could then actually just mean 'highly important for easy overfitting'.
| null | CC BY-SA 4.0 | null | 2023-06-02T17:23:20.960 | 2023-06-02T17:35:39.927 | 2023-06-02T17:35:39.927 | 389442 | 389442 | null |
617701 | 2 | null | 325156 | 0 | null | The $F$-test considers all data points in one model. On the other hand, tuning a regularization hyperparameter through cross-validation requires data to be left out of the model-training process, data that could have been used to improve coefficient estimates. There are situations where data are not virtually infinite and are too precious to be used for something other than estimating regression coefficients.
Further, classical hypothesis tests are amenable to sample size calculations where you can know how many observations you need to collect to reliably detect an effect of interest. While you might be able to do something like this for regularized regression through simulation, it is not as straightforward.
| null | CC BY-SA 4.0 | null | 2023-06-02T17:28:28.103 | 2023-06-02T17:28:28.103 | null | null | 247274 | null |
617702 | 2 | null | 287431 | 1 | null | >
But linear regression, as well as any other technique I am aware of assumes that the variables are independent of each other.
If by "variables" you mean the "features", then this is not correct for linear regression or most of the other common supervised learning models. Features can be, and often are, deterministic functions of each other, such as using polynomials.
Indeed, what you propose is completely routine. Perhaps the easiest example is a model with an interaction between two features: $\hat y = \hat\beta_0 + \hat\beta_1x_1 + \hat\beta_2x_2 + \hat\beta_3x_1x_2$.
In that regression, you have the original two features, $x_1$ and $x_2$. However, you also have a function (often called a "basis function") of both features to give the $x_1x_2$ term.
You can extend this idea to as many features and basis functions as you want. For instance, the following is a totally legitimate linear regression.
$$
\hat y = \hat\beta_0 + \hat\beta_1x_1 + \hat\beta_2x_2 + \hat\beta_3x_3 + \hat\beta_4\cos(x_1x_2) + \hat\beta_5x_1^{x_2^{x_3}}
$$
Using your notation, the $y_i$ would be the basis functions of the original features.
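For instance, in R such fits are routine (with hypothetical columns `y`, `x1`, `x2`, `x3` in a data frame `dat`):

```r
# interaction model: y ~ b0 + b1*x1 + b2*x2 + b3*x1*x2
fit1 <- lm(y ~ x1 * x2, data = dat)

# arbitrary basis functions of the original features
# (assumes x1 > 0 so the power term is defined)
fit2 <- lm(y ~ x1 + x2 + x3 + I(cos(x1 * x2)) + I(x1^(x2^x3)), data = dat)
summary(fit2)
```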
| null | CC BY-SA 4.0 | null | 2023-06-02T17:40:21.893 | 2023-06-02T17:40:21.893 | null | null | 247274 | null |
617703 | 2 | null | 617650 | 0 | null | The Cauchy distribution is a special case of the t distribution, with 1 degree of freedom. While JAGS does not have the Cauchy, it does have the t distribution.
`dt(mu, tau, k)`
Just set k equal to 1 and you have a Cauchy prior
`dt(mu, tau, 1)`
| null | CC BY-SA 4.0 | null | 2023-06-02T17:44:10.117 | 2023-06-03T04:26:39.070 | 2023-06-03T04:26:39.070 | 362671 | 389444 | null |
617711 | 2 | null | 255822 | 0 | null | Strictly proper scoring rules are optimized in expected value by the true probabilities.
More informally, optimizing strictly proper scoring rules leads to your model seeking out the true probabilities. This sounds like the exact goal you have: to make probabilistic predictions as close to the true default probabilities as you can.
Two common strictly proper scoring rules are log loss and Brier score. Below, $y=(y_1,\dots,y_n)$ denotes the true observations $(y_i\in\{0, 1\})$, and $p = (p_1,\dots,p_n)$ denotes the predicted probabilities from a model $(p_i\in[0, 1])$.
$$
\text{Log Loss}\\
L(y, p) = -\dfrac{1}{n}\overset{n}{\underset{i=1}{\sum}}\bigg(
y_i\log(p_i) + (1-y_i)\log(1 - p_i)
\bigg)\\
\text{Brier Score}\\
B(y, p) = \dfrac{1}{n}\overset{n}{\underset{i=1}{\sum}}\bigg(
y_i - p_i
\bigg)^2
$$
A complaint about either of these might be that there is no sense of how good a score is, while a typical regression metric of $R^2$ gives a sense of model quality. First, that interpretation of $R^2$ is [not as easy as one might hope](https://stats.stackexchange.com/questions/414349/is-my-model-any-good-based-on-the-diagnostic-metric-r2-auc-accuracy-rmse/414350#414350), so I challenge the idea that any performance metric can be evaluated as "good" or "bad" without a context the way one might like to consider $90\%$ an $\text{A}$ in school while $40\%$ is an $\text{F}$. Second, both Brier score and log loss can be transformed to give $R^2$-style measures of performance. Let $\bar y$ be the proportion of $1$s in your data. [Then McFadden's and Efron's pseudo $R^2$ values are defined as follows](https://stats.oarc.ucla.edu/other/mult-pkg/faq/general/faq-what-are-pseudo-r-squareds/):
$$
R^2_{\text{McFadden}} = 1 - \left(\dfrac{
\overset{n}{\underset{i=1}{\sum}}\bigg(
y_i\log(p_i) + (1-y_i)\log(1 - p_i)
\bigg)
}{
\overset{n}{\underset{i=1}{\sum}}\bigg(
y_i\log(\bar y) + (1-y_i)\log(1 - \bar y)
\bigg)
}\right)
\\
R^2_{\text{Efron}} = 1 - \left(\dfrac{
\overset{n}{\underset{i=1}{\sum}}\bigg(
y_i - p_i
\bigg)^2
}{
\overset{n}{\underset{i=1}{\sum}}\bigg(
y_i - \bar y
\bigg)^2
}\right)
$$
As the $\bar y$ can be seen as the [prior](https://stats.stackexchange.com/a/583115/247274) probability of a default, these pseudo $R^2$ values can be seen as comparisons of the values of strictly proper scoring rules achieved by the model compared to the value of that same strictly proper scoring rule that is achieved by predicting the prior probability every time, which is analogous to how the usual $R^2$ can be seen as comparing model predictions of the conditional mean to a model predicting the conditional mean as the marginal mean every time. Think about it this way: if you knew nothing about how your features influenced default probability, what would be your best guess for someone's default default probability if you knew the overall default rate to be $\bar y?$ These pseudo $R^2$ calculations go with the logic that the best naïve approach is to predict $\bar y$ every time.
If you want to do out-of-sample assessments of performance, I have [a strong opinion](https://stats.stackexchange.com/q/590199/247274) about how to do that, a stance that now has [support](https://stats.stackexchange.com/a/616976/247274) in the statistics literature from Hawinkel & Waegeman (2023)!
Finally, if you are checking that loans predicted to default with probabilities between $20\%$ and $30\%$ do default with the claimed probability, you are assessing your model calibration. I think the `sklearn` [documentation](https://scikit-learn.org/stable/modules/calibration.html) gives a good idea of what this means and gives some references. What you are doing by binning into an interval like $[0.2, 0.3]$ is evocative of the Hosmer-Lemeshow test, which [Frank Harrell argues is obsolete](https://stats.stackexchange.com/a/207512/247274) and replaced by techniques like those present in the `sklearn` function or his `rms` package (such as `rms::calibrate`).
I have a few posts on here about probability calibration.
[Probability Calibration of Statistical Models](https://stats.stackexchange.com/a/552151/247274)
[Should I use statistical tests (e.g., Hosmer-Lemeshow) to assess predictive models?](https://stats.stackexchange.com/a/611881/247274)
[classification ML model: probability of positive label knowing the model score](https://stats.stackexchange.com/questions/541101/classification-ml-model-probability-of-positive-label-knowing-the-model-score/611414#611414)
[Walk through rms::calibrate for logistic regression](https://stats.stackexchange.com/questions/561922/walk-through-rmscalibrate-for-logistic-regression)
[Walk through rms::val.prob](https://stats.stackexchange.com/questions/563867/walk-through-rmsval-prob)
REFERENCE
[Hawinkel, Stijn, Willem Waegeman, and Steven Maere. "Out-of-sample R 2: estimation and inference." The American Statistician just-accepted (2023): 1-16.](https://arxiv.org/pdf/2302.05131.pdf)
| null | CC BY-SA 4.0 | null | 2023-06-02T18:01:09.513 | 2023-06-02T18:13:02.867 | 2023-06-02T18:13:02.867 | 247274 | 247274 | null |
617714 | 1 | null | null | 0 | 37 | I've implemented a simple one sample t-test in python:
```
import numpy as np
from math import sqrt
from scipy.stats import t, ttest_1samp
def one_sample_ttest(sample, global_mean, alpha=0.05):
    n = sample.size
    std = np.std(sample)
    mean = np.mean(sample)
    serr = std / sqrt(n)
    tscore = (mean - global_mean) / serr
    pval = (t.sf(x=tscore, df=n-1)) * 2  # alternatively = (1 - t.cdf(x=tscore, df=n-1)) * 2
    return tscore, pval
```
However, when comparing the computed t-score with [ttest_1samp from scipy.stats module](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ttest_1samp.html), I'm getting a different result:
```
data = np.array([5.473, 2.967, 5.337, -1.054])
print(one_sample_ttest(data, 0.0)[0])
print(ttest_1samp(data, 0.0).statistic)
######################
2.4094767021440173
2.08666803388347
```
Through trial-and-error, I've found this is likely caused by the calculation of the standard error in the denominator of the test formula:
$$
t=\frac{\bar{x} - \mu_0}{\frac{s}{\sqrt{n}}}
$$
which becomes (using the number of DoF instead of sample size):
$$
t=\frac{\bar{x} - \mu_0}{\frac{s}{\sqrt{n-1}}}
$$
Is there a particular reason for this? What are the practical implications?
| Calculation of Standard Error (SE) in scipy's implementation of one sample t-test? | CC BY-SA 4.0 | null | 2023-06-02T18:07:15.867 | 2023-06-02T22:40:03.730 | 2023-06-02T22:40:03.730 | 44269 | 388119 | [
"python",
"t-test"
] |
617719 | 2 | null | 617342 | 0 | null | Consider a bivariate zero-mean VAR(1) model
\begin{aligned}
y_{1,t} &= a_{1,1} y_{1,t-1} + a_{1,2} y_{2,t-1} + u_{1,t}, \\
y_{2,t} &= a_{2,1} y_{1,t-1} + a_{2,2} y_{2,t-1} + u_{2,t}
\end{aligned}
or in matrix notation, $\mathbb{y}_t=A \mathbb{y}_{t-1} + \mathbb{u}_t$.
Both $y_{1,t}$ and $y_{2,t}$ are dependent variables, each in its own equation, so they are explained by the model. Thus, they are endogenous. The lags, on the other hand, might seem to be exogenous, as they are not explained by the model... or are they?
The model applies for every $t$. E.g. for $t=6$,
\begin{aligned}
y_{1,6} &= a_{1,1} y_{1,5} + a_{1,2} y_{2,5} + u_{1,6}, \\
y_{2,6} &= a_{2,1} y_{1,5} + a_{2,2} y_{2,5} + u_{2,6}
\end{aligned}
and for $t=5$,
\begin{aligned}
y_{1,5} &= a_{1,1} y_{1,4} + a_{1,2} y_{2,4} + u_{1,5}, \\
y_{2,5} &= a_{2,1} y_{1,4} + a_{2,2} y_{2,4} + u_{2,5}.
\end{aligned}
Thus, if we decrease $t$ by 1, the seemingly exogenous variables $y_{1,5}$ and $y_{2,5}$ turn out to be endogenous, as the model explains them, too. Therefore, all the $y$ variables can be considered endogenous in this model.
(Also, for a specific time period such as $t=6$, $y_{1,5}$ and $y_{2,5}$ are predetermined. This is a useful feature for deriving some desirable properties of least squares or other estimators of the model's parameters.)
You can now generalize from the bivariate zero-mean VAR(1) to a $k$-variate VAR($p$) model with nonzero mean, and the same logic goes through.
| null | CC BY-SA 4.0 | null | 2023-06-02T19:24:11.730 | 2023-06-02T19:24:11.730 | null | null | 53690 | null |
617720 | 1 | null | null | 0 | 17 | I have successfully performed an MLE estimation of a model for a 1d stochastic process with another latent stochastic process (that we can interpret as volatility) for which the values are not observable. I now want to design an algorithm to estimate the most likely (or expected) value of the latent process using the information we have about its distribution but also using the observed data.
Let's write this a bit more formally. I will simplify and write my model as a discrete time-series model.
## The context of the model
Assume we have data $X_t$, with $N$ samples, so for the time indices $t \in \{t_1, t_2, \dots, t_N\}$. Let's also write the time increment from one time index to the next to simplify notation as $\delta t$, and let's assume the grid is constant (all time indices are equally spaced).
We now have a model for $X(t)$ that includes a latent process $V(t) > 0$:
$$X(t_n) = f\left(\delta t, X(t_{n-1})\right) + V(t_{n-1})\cdot Z_X(\delta t)$$
$$V(t_n) = g\left(\delta t, V(t_{n-1})\right) + Z_V (\delta t)$$
Our $f$ and $g$ are deterministic functions; if they are linear we get back an AR(1) model, for example. $Z_X$ and $Z_V$ are our noise terms, whose distributions depend only on the time elapsed. Since we assume an equally spaced grid, their distributions do not change, so we can drop the $\delta t$ to simplify notation: they are iid for every $t$ index. $Z_V$ only produces positive values, but $Z_X$ doesn't have such a restriction (and neither does $X(t)$).
Now, we have already applied an MLE algorithm to fit the parameters of $f$, $g$, $Z_X$, $Z_V$ and also $V(t_1)$ (because this value is not observed and the distribution of $V(t)$ is conditional on the first value).
## What we are missing for forecasting
We now want to use our model to simulate future values $X(t > t_N)$. We want to start our simulation by using our recursive scheme and starting at the latest available data point $X(t_N)$. However, we have no value for $V(t_N)$ because it's a latent variable.
The naive approach here would be to use the expected value $\mathbb{E}\left[V(t_N)|V(t_1)\right]$, since we can calculate this with our parameters. However, we know that there's information contained in the observations of $X$ close to the point $t_N$, so the samples $\{X(t_N), X(t_{N-1}), X(t_{N-2})\dots\}$ should help us obtain a better estimate. We could use close samples up to a number we find reasonable to truncate at, but I think I will first try to write this using only one time step, and then generalize further if possible.
So, my question is, how can we try to approximate
$$\mathbb{E}[V(t_N)|V(t_1), X(t_N), X(t_{N-1}), \dots] ?$$
I'm not experienced with applying Bayesian methods, but I think we can use the distribution that we already have for $V(t)$ as a prior.
I also thought about somehow including the information that we are supposed to have
$$V(t_n) \sim \frac{X(t_{n+1}) - f(\delta t, X(t_n))}{Z_X}$$
But $Z_X$ here is a random variable that can be zero, so this reciprocal is not well defined.
| Estimating most likely value of latent variable after we have obtained parameters by MLE | CC BY-SA 4.0 | null | 2023-06-02T19:43:33.827 | 2023-06-02T19:55:05.680 | 2023-06-02T19:55:05.680 | 382703 | 382703 | [
"time-series",
"bayesian",
"stochastic-processes",
"latent-variable"
] |
617721 | 2 | null | 617601 | 0 | null | Denote $\begin{bmatrix} \sigma_1 & \cdots & \sigma_k\end{bmatrix}'$ by $d$, $\operatorname{diag}(x_1, \ldots, x_k)$ by $X$. Since $|\Sigma| = |D|^2|A(\rho)| = \prod\limits_{i = 1}^k\sigma_i^2|A(\rho)|$ and $x'D = d'X$, the log-likelihood function is
\begin{align} l(\sigma_1^2, \ldots, \sigma_k^2) &= -\frac{1}{2}\log|\Sigma| - \frac{1}{2}x'DA(\rho)^{-1}Dx \\
&= -\frac{1}{2}\sum_{i = 1}^k\sigma_i^2 - \frac{1}{2}d'XA(\rho)^{-1}Xd + \text{constant}.
\end{align}
It thus follows by the quadratic form differentiation formula $\partial z'Mz/\partial z = 2Mz$ and the chain rule that
\begin{align}
\frac{\partial l(\sigma_1^2, \ldots, \sigma_k^2)}{\partial \sigma_j^2}
&= -\frac{1}{2} - \frac{1}{\sigma_j}e_j'XA(\rho)^{-1}Xd \\
&= -\frac{1}{2} - \frac{1}{\sigma_j}
\begin{bmatrix}0 & \cdots & x_j & \cdots & 0\end{bmatrix}
A(\rho)^{-1}\begin{bmatrix} x_1\sigma_1 \\ \vdots \\ x_k\sigma_k
\end{bmatrix}, \tag{1}
\end{align}
where $e_j$ is the $k$-long column vector with all entries $0$ but the $j$-th entry $1$.
As $A(\rho) = (1 - \rho)I + \rho jj'$, the [Sherman-Morrison formula](https://en.wikipedia.org/wiki/Sherman%E2%80%93Morrison_formula) gives
\begin{align}
A(\rho)^{-1} &= (1 - \rho)^{-1}I - (1 - \rho)^{-2}\frac{\rho jj'}{1 + (1 - \rho)^{-1}\rho k} \\
&= (1 - \rho)^{-1}\left[I - \frac{\rho jj'}{1 + (k - 1)\rho}\right]. \tag{2}
\end{align}
Substituting $(2)$ into $(1)$ yields
\begin{align}
\frac{\partial l(\sigma_1^2, \ldots, \sigma_k^2)}{\partial \sigma_j^2}
= -\frac{1}{2\sigma_j^2} - \frac{x_j^2}{2(1 - \rho)} +
\frac{x_j\rho}{2(1 - \rho)\sigma_j}\,\frac{\sum_i x_i\sigma_i}{1 + (k - 1)\rho}.
\end{align}
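As a quick sanity check (added for illustration, not part of the original derivation), the last expression can be verified numerically against the objective as written above, treating $x$ and $\rho$ as fixed; for example in R:
```
set.seed(1)
k <- 4; rho <- 0.3
x <- rnorm(k)
s2 <- runif(k, 0.5, 2)^2                          # the sigma_j^2
A <- (1 - rho) * diag(k) + rho * matrix(1, k, k)  # A(rho)
# objective: -1/2 sum log(sigma_i^2) - 1/2 d'X A^{-1} X d, with d = sqrt(s2)
obj <- function(s2) {
  d <- sqrt(s2)
  -0.5 * sum(log(s2)) - 0.5 * sum((x * d) * solve(A, x * d))
}
# closed-form gradient from the last display
grad <- function(s2) {
  s <- sqrt(s2)
  -1 / (2 * s2) - x^2 / (2 * (1 - rho)) +
    x * rho * sum(x * s) / (2 * (1 - rho) * s * (1 + (k - 1) * rho))
}
numg <- sapply(seq_len(k), function(j) {          # central finite differences
  h <- 1e-6; e <- numeric(k); e[j] <- h
  (obj(s2 + e) - obj(s2 - e)) / (2 * h)
})
cbind(numeric = numg, closed_form = grad(s2))     # the two columns should agree
```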
| null | CC BY-SA 4.0 | null | 2023-06-02T19:49:43.163 | 2023-06-02T21:47:47.220 | 2023-06-02T21:47:47.220 | 20519 | 20519 | null |
617722 | 1 | null | null | 0 | 11 | For what reason should I think about changing the total score of a scale if I allowed residual covariances for similar-sounding items?
If I still work with all of the items, should I think about changing the total score?
| How is allowing residual covariances related to changing total score of a scale? | CC BY-SA 4.0 | null | 2023-06-02T19:52:59.477 | 2023-06-02T19:52:59.477 | null | null | 385050 | [
"covariance",
"residuals",
"confirmatory-factor"
] |
617723 | 1 | null | null | 0 | 10 | This is for a simple survey dataset. The client has sample goals on two levels: one for group A vs. group B, and another for group X vs. group Y, so we have four weights, one attached to each of these groups. The trouble is that a particular respondent could be part of group A as well as group Y. How can I combine weights for respondents who belong to multiple groups?
| Multiple weights in survey dataset | CC BY-SA 4.0 | null | 2023-06-02T20:04:45.783 | 2023-06-02T20:04:45.783 | null | null | 389451 | [
"sample-weighting"
] |
617724 | 1 | null | null | 0 | 5 | I am trying to figure out the best approach for my prediction task. I have a dataset with four variables: year, ranging from 2010 to 2022; categorical variables $A$ and $B$; and a numeric target variable $T$. I have numeric data that describes each category in $A$ and $B$, which can be used as embeddings for these instead of the raw categories. Not all categories in $A$ and $B$ occur every year; in fact, most combinations occur over only one to two years. The average of my target $T$ seems to show a strong increasing trend. The goal of my problem is to predict target $T$ for future years for a new data sample.
The question is: how can I capture the global trends in $T$ over time while predicting using $A$ and $B$?
Time-agnostic models like random forests and boosting would capture the dependencies between $A$, $B$, and $T$, but are not known to capture time trends well. On the other hand, since most $A \times B$ combinations have data for only one year, I am not sure how I would use time-sequence-based methods like ARIMA or LSTM.
What approach should I take to my problem? Any help would be greatly appreciated!
PS: My test set may contain unseen categories for A and B, so use of the numeric embeddings is a must.
| Trying to capture global time dependences for prediction | CC BY-SA 4.0 | null | 2023-06-02T20:42:33.760 | 2023-06-02T20:42:33.760 | null | null | 236994 | [
"machine-learning",
"time-series",
"forecasting"
] |
617725 | 2 | null | 396315 | 0 | null | Let me also add that you can compute this without using the NumPy library (Method 2 below only uses `numpy.testing` for a sanity check). I am currently working on a journal paper that requires me to use stick-breaking as a prior for my autoencoder.
>
Method 1:
```
def compute_stick_segment(self, v):
    # v: (batch, n_dims) tensor of stick-breaking fractions v_k
    # note: assumes `torch` is imported and `device` is defined elsewhere
    n_dims = v.size()[1]
    pi = torch.ones(size=v.size()).to(device)
    for idx in range(n_dims):
        product = 1
        for sub_idx in range(idx):
            product *= 1 - v[:, sub_idx]          # prod_{j<idx} (1 - v_j)
        pi[:, idx] = v[:, idx] * product          # pi_idx = v_idx * product
    return pi
```
>
Method 2
```
from numpy.testing import assert_almost_equal
# note: assumes `torch`, `device` and `latent_ndims` are defined elsewhere in the module

def set_v_K_to_one(self, v):
    # set Kth fraction v_i,K to one to ensure the stick segments sum to one
    if v.ndim > 2:
        v = v.squeeze()
    v0 = v[:, -1].pow(0).reshape(v.shape[0], 1)             # a column of ones
    v1 = torch.cat([v[:, :latent_ndims - 1], v0], dim=1)
    return v1.to(device)

def get_stick_segments(self, v):
    n_samples = v.size()[0]
    n_dims = v.size()[1]
    pi = torch.zeros((n_samples, n_dims))
    for k in range(n_dims):
        if k == 0:
            pi[:, k] = v[:, k]
        else:
            # pi_k = v_k * prod_{j<k} (1 - v_j)
            pi[:, k] = v[:, k] * torch.stack([(1 - v[:, j]) for j in range(n_dims) if j < k]).prod(axis=0)
    # ensure stick segments sum to 1
    assert_almost_equal(torch.ones(n_samples), pi.sum(axis=1).detach().numpy(),
                        decimal=2, err_msg='stick segments do not sum to 1')
    return pi.to(device)

# Usage:
z = self.set_v_K_to_one(v)
pi = self.get_stick_segments(z)
```
Either of the two methods works fine.
| null | CC BY-SA 4.0 | null | 2023-06-02T21:00:28.670 | 2023-06-02T21:00:28.670 | null | null | 348785 | null |
617726 | 2 | null | 617680 | 1 | null | What you seem to be willing to settle for is an estimate of the [thermal time constant](https://en.wikipedia.org/wiki/Time_constant#Thermal_time_constant) for the building. That's the simplest [lumped-element model for a thermal system](https://en.wikipedia.org/wiki/Lumped-element_model#Thermal_systems). You assume that the thermal mass within the building is well-mixed in terms of temperature with the only resistance to heat flow at the exterior wall, which might be a risky assumption. You at least have the advantage that the temperature changes are relatively small in magnitude (about 2 degrees C), allowing for a simplified model where a first-order approximation like this might work well enough for your purposes.
The results will probably depend on whether ventilation fans are at work in the building even if the heating or air conditioning is off. The legend to your plot suggests that might sometimes be the case. If so, you need to take that into account.
If there is resistance to heat flow within the building (e.g., walls, closed doors, etc.) and there are masses within the building that lose heat slowly (e.g., thick solid walls), then what you will get at best is a model of temperature changes at the location of your temperature sensor, not necessarily representing the building as a whole. Furthermore, my sense is that the simple exponential model based on [Newton's law of cooling](https://en.wikipedia.org/wiki/Newton%27s_law_of_cooling) might not work well, even for that single measurement location, when thermal transfer within the building is slow.
From that perspective, it would make sense to analyze individual time courses separately, restricted to times when heating/cooling systems are off. When you do that, however, you should keep critical information associated with each estimate of the time constant: actual initial building temperature, actual outside temperature, interior fan status, anything else that might affect heat flow. Then you can evaluate whether the "time constant" you are estimating is truly constant, versus systematically changing with interior and exterior temperatures and the status of ventilation fans.
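If it helps, here is a minimal sketch of fitting one such cool-down segment to Newton's law of cooling in R (the data frame `seg`, `T_out`, and the simulated numbers are placeholders, not your data):
```
# T(t) = T_out + (T_0 - T_out) * exp(-t / tau); estimate the time constant tau
seg <- data.frame(hours = 0:10,
                  temp  = 22 - 2 * (1 - exp(-(0:10) / 6)) + rnorm(11, sd = 0.05))
T_out <- 20            # outside temperature during the segment (assumed constant)
T_0   <- seg$temp[1]   # interior temperature when heating/cooling switched off
fit <- nls(temp ~ T_out + (T_0 - T_out) * exp(-hours / tau),
           data = seg, start = list(tau = 5))
coef(fit)["tau"]       # estimated thermal time constant, in hours
```
Comparing the estimated `tau` across segments with different interior/exterior temperatures and fan states is then a direct way to see whether it really behaves like a constant.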
| null | CC BY-SA 4.0 | null | 2023-06-02T21:20:32.693 | 2023-06-02T21:20:32.693 | null | null | 28500 | null |
617727 | 1 | null | null | 0 | 6 | Suppose I have a connected, undirected graph with edge weights, and I do a random walk over it such that the probability of moving from any node u to one of its neighbors v is equal to the weight of edge (u,v) divided by the sum of the weights of the edges incident to u. Is there a closed form for the mean and variance of the number of steps it takes to reach any node j from any other node i on such a walk, with i and j not necessarily neighbors?
| Mean and variance of number of steps to reach one node from another in random walk on weighted, undirected graph | CC BY-SA 4.0 | null | 2023-06-02T21:25:02.677 | 2023-06-02T21:25:02.677 | null | null | 249424 | [
"graph-theory",
"random-walk"
] |
617728 | 1 | null | null | 0 | 7 | Say I ask people via survey, "Pick your one favorite option: A, B, C, or D." What is the proper way to compute pairwise significance between the answer options?
| Right way to compute pairwise significance for a multiple choice survey question? | CC BY-SA 4.0 | null | 2023-06-02T21:40:19.163 | 2023-06-02T21:40:19.163 | null | null | 389455 | [
"statistical-significance",
"survey",
"multinomial-distribution"
] |
617729 | 1 | null | null | 1 | 19 | Theoretically in Bayesian inference one could use one experiment's posterior as another experiment's prior, such that knowledge of the parameters accumulates from $p(\theta) \rightarrow p(\theta|\mathbf{X}_{1}) \rightarrow p(\theta|\mathbf{X}_{1}, \mathbf{X}_{2}) \rightarrow \ldots$
But in practice, of course, this is only possible with conjugate priors that guarantee that posteriors have the same nice parametric form. With MCMC sampling, all you have are empirical distributions. It was [asked](https://stats.stackexchange.com/q/195055) [many](https://stats.stackexchange.com/q/241690) [times](https://stats.stackexchange.com/q/379186) [before](https://stats.stackexchange.com/q/422259) how one could turn posteriors into new priors, but to take it a little further: I was wondering if one could fit a mixture of Gaussians to the empirical distribution of MCMC samples? Because, if I recall correctly, [Gaussian mixtures' "universal approximation" property](https://stats.stackexchange.com/q/446351) should allow them to approximate any probability distribution.
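For a one-dimensional parameter, a minimal sketch of the idea using the mixtools package (the simulated `draws` stand in for real MCMC output; the choice of `k = 2` is arbitrary):
```
library(mixtools)
set.seed(1)
draws <- c(rnorm(3000, 0, 1), rnorm(2000, 3, 0.5))   # stand-in for posterior samples
fit <- normalmixEM(draws, k = 2)                     # fit a 2-component normal mixture
# approximate prior density implied by the fitted mixture
prior_dens <- function(theta) {
  rowSums(sapply(seq_along(fit$lambda),
                 function(j) fit$lambda[j] * dnorm(theta, fit$mu[j], fit$sigma[j])))
}
curve(prior_dens(x), from = -4, to = 6)
```
For a multivariate posterior one would presumably fit a multivariate Gaussian mixture (e.g. with mclust) instead.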
Could this work, or has it even been implemented before? References would be greatly appreciated.
| Could one use mixtures of Gaussians to turn MCMC posterior samples into a new prior? | CC BY-SA 4.0 | null | 2023-06-02T21:51:00.020 | 2023-06-03T02:52:54.357 | 2023-06-03T02:52:54.357 | 71679 | 71679 | [
"bayesian",
"markov-chain-montecarlo",
"posterior",
"gaussian-mixture-distribution",
"approximation"
] |
617730 | 2 | null | 239141 | 0 | null | This is way more basic than the other suggestions, but you could plot ln(Y) against ln(X). The slope of the best-fit line here would be the order of the relationship between X and Y (for instance, if the slope of ln(Y) vs. ln(X) is a, then the relationship between X and Y is approximately Y = C·X^a, where C is a constant).
To find whether the relationship between X and Y is linear, you could run LINEST (an Excel function) on the plot of ln(Y) vs. ln(X). This will give you the slope and the uncertainty in the slope. Then compute the Z score of the slope against 1, i.e. (slope − 1) / SE. If the absolute value of the Z score is less than about 2, the data are consistent with a linear (order-1) relationship; if it's more than 2, they are not.
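If you are not working in Excel, a rough R equivalent of the same check (with toy placeholder data) would be:
```
set.seed(1)
x <- runif(50, 1, 10)
y <- 2 * x^1.05 * exp(rnorm(50, sd = 0.05))          # toy data with order close to 1
fit  <- lm(log(y) ~ log(x))
a    <- coef(summary(fit))["log(x)", "Estimate"]     # estimated order
se_a <- coef(summary(fit))["log(x)", "Std. Error"]
z    <- (a - 1) / se_a                               # Z score of the slope against 1
```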
| null | CC BY-SA 4.0 | null | 2023-06-02T21:56:27.200 | 2023-06-02T21:56:27.200 | null | null | 389458 | null |
617731 | 1 | null | null | 0 | 7 | I am training ML regression models to predict financial returns in a high frequency trading environment.
I have 1 time-series of intraday data for 40 years for 1 individual security at the moment.
I have 2 questions:
- I have run a cross-validation (keeping the timestamps in mind when creating the folds). However, after I obtain the best model and get predictions on the test data, even though my overall gross returns are positive, I am trading too often, so trading costs eat all my profits and leave me with negative returns.

  Thus, I need to find a way to select which days to trade based on the predicted returns. As I cannot apply the cross-sectional approach typical of low-frequency strategies (taking a position only on the top X% of securities), I need another solution. Instead of fixing an a priori lower-bound threshold and trading only when the predicted returns are higher (in absolute terms), I would like to treat this threshold as an extra parameter, so that I focus only on the most profitable trades according to my predictions.

  I thought about the following approach (see also the sketch after this list):

  1. Run cross-validation to select the best hyperparameter combination, thus obtaining the best model.
  2. Retrain the model with the best hyperparameters found in step 1 on each of the k folds (defined in the same way as before), creating a "manual" cross-validation described as follows.
  3. For each fold, obtain the validation predicted returns and, given a list of lower-bound thresholds, compute the Sharpe ratio (the scoring metric in my case) for each of the lower-bound thresholds.
  4. For each threshold, compute the mean of the validation Sharpe ratios across all k folds and pick the threshold with the highest value.

  I was wondering whether this reasoning is correct from an ML perspective, or whether I am increasing the bias significantly and the second validation should be run on another held-out dataset.

- I would also like to do some feature selection before running the hyperparameter tuning. I know it could be done via nested CV, but that would become too cumbersome from a computational-time perspective.

  Would it be OK to run a bootstrap (randomly selecting X observations) over the whole dataset several times and each time train a random forest with default hyperparameters to obtain feature importances, then only consider features that were on average in the top 10 positions as predictors in the cross-validation? Does the bootstrap reduce the bias I would introduce if I ran only one random forest on the whole dataset before doing CV?
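For concreteness, a rough sketch of the threshold search in steps 3-4 (`folds`, `best_model`, `X` and `realized_returns` are placeholders for my own objects):
```
thresholds <- seq(0, 0.002, by = 0.0001)
# one column per fold, one row per candidate threshold
sharpe_by_fold <- sapply(folds, function(val_idx) {
  preds <- predict(best_model, X[val_idx, ])
  sapply(thresholds, function(th) {
    ret <- realized_returns[val_idx] * sign(preds) * (abs(preds) > th)
    mean(ret) / sd(ret)            # validation Sharpe ratio at this threshold
  })
})
best_threshold <- thresholds[which.max(rowMeans(sharpe_by_fold))]
```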
| Subsequent cross-validation and Feature selection via bootstrap | CC BY-SA 4.0 | null | 2023-06-02T22:05:08.043 | 2023-06-02T22:05:08.043 | null | null | 389355 | [
"machine-learning",
"cross-validation",
"feature-selection",
"dataset",
"bootstrap"
] |
617732 | 1 | null | null | 0 | 9 | I was using a multivariate gaussian (`mgaussian`) `glmnet` model to solve the multitask learning problem below (deconvolution of a multichannel signal using a known point spread function / blur kernel, by regressing the multichannel signal on shifted copies of the point spread function), but I am experiencing poor performance: the best subset is inferred to be much larger than it actually is (moderately larger if I set `standardize.response` to `FALSE`, much larger if I set `standardize.response` to `TRUE`). The solution is also really sensitive to the value I use for `thresh` (e.g. with 1E-10 I get a completely different, much less sparse & much worse solution).

I am not sure in my case if I should set `standardize.response` to `TRUE` or `FALSE`. Right now my data for the different channels / tasks can be somewhat different in scale, but despite these differences in scale, the data is simulated to all have the same Gaussian noise SD for the different channels / tasks. Standardizing it would make the scale of the different channels more comparable but would also lead to the noise having different variances for different channels, which the fitted model would not take into account. So I gather I should set `standardize.response` to `FALSE`. Doing so, however, leads to a model that is not sparse enough & has too large a support size compared to the true support size. Using `lambda.1se` as the `lambda` value improves things a bit, but the result is still not too great. Is this expected with an L2/L1 block norm penalty (group LASSO penalty) as used here (the real, sparser best subset solution I suppose one would be interested in here would use an L0/Linfinity block norm penalty)?

The question also is whether I could perhaps use some information-theoretical criterion to better identify the lambda that corresponds most closely to the sparsest best subset solution & would recover the true support best (e.g. BIC, EBIC or GIC).

Finally, am I also correct that `glmnet` right now does not allow observation `weights` to be a matrix, which would allow different observation weights for different channels / tasks (I would be interested in using `1/(Y+0.1)` as approximate 1/Poisson variance weights to do multitask learning for identity-link Poisson with nonnegativity constraints on the coefficients)?
Example code:
```
library(remotes)
remotes::install_github("tomwenseleers/L0glm/L0glm")
library(L0glm)
# simulate blurred multichannel spike train
set.seed(1)
s <- 0.1 # sparsity (% of timepoints where there is a peak)
p <- 500 # 500 variables
# simulate multichannel blurred spike train with Gaussian noise
sd_noise <- 1
sim <- simulate_spike_train(n=p,
p=p,
k=round(s*p), # true support size = 0.1*500 = 50
mean_beta = 10000,
sd_logbeta = 1,
family="gaussian",
sd_noise = sd_noise,
multichannel=TRUE, sparse=TRUE)
X <- sim$X # covariate matrix with shifted copies of point spread function, n x p matrix
Y <- sim$y # multichannel signal (blurred spike train), n x m matrix
colnames(X) = paste0("x", 1:ncol(X)) # NOTE: if colnames of X and Y are not set abess gives an error message, maybe fix this?
colnames(Y) = paste0("y", 1:ncol(Y))
true_coefs <- sim$beta_true # true coefficients
m <- ncol(Y) # nr of tasks
n <- nrow(X) # nr of observations
p <- ncol(X) # nr of independent variables (shifted copies of point spread functions)
W <- 1/(Y+0.1) # approx 1/variance Poisson observation weights with family="poisson", n x m matrix
cvfit <- cv.glmnet(X, Y,
alpha = 1, # LASSO
family = "mgaussian", # group LASSO = model with L1/L2 block norm penalty
nlambda = 100,
nfolds = 5,
standardize = FALSE,
standardize.response = FALSE,
intercept = FALSE,
relax = FALSE,
lower.limits = rep(0, ncol(X)+1)) # impose nonnegativity constraints on coefficients
plot(cvfit)
fit_glmnet <- glmnet(X,
Y,
alpha = 1, # LASSO
family = "mgaussian", # group LASSO = model with L1/L2 block norm penalty
nlambda = 100,
standardize = FALSE,
standardize.response = FALSE,
intercept = FALSE,
relax = FALSE,
lower.limits = rep(0, ncol(X)+1)) # for nonnegativity constraints
# best lambda - I am using cvfit$lambda.1se instead of cvfit$lambda.min to get a slightly sparser model, a bit closer to the ground truth in terms of support
best_lambda <- cvfit$lambda.1se
# get the coefficients for each task for the best lambda value
coefs <- coef(fit_glmnet, s = best_lambda)
beta_mgaussian_glmnet <- do.call(cbind, lapply(seq_len(m), function (channel) as.matrix(coefs[[channel]][-1,,drop=F])))
beta_mgaussian_glmnet[abs(beta_mgaussian_glmnet)<0.01] = 0 # slight amount of thresholding of small coefficients
image(x=1:nrow(sim$y), y=1:ncol(sim$y), z=beta_mgaussian_glmnet^0.01, col = topo.colors(255),
useRaster=TRUE,
xlab="Time", ylab="Channel", main="nonnegative group Lasso glmnet (red=true support)")
abline(v=(1:nrow(sim$X))[as.vector(rowMax(sim$beta_true)!=0)], col="red")
# abline(v=(1:nrow(sim$X))[as.vector(rowMin(beta_mgaussian_glmnet)!=0)], col="cyan")
sum(rowMax(sim$beta_true)!=0) # 50 true peaks
sum(rowMin(beta_mgaussian_glmnet)!=0) # 72 peaks detected - too large...
```
[](https://i.stack.imgur.com/X401D.png)
[](https://i.stack.imgur.com/tO90u.png)
| Tuning lambda in glmnet mgaussian multitask learning model for optimal support recovery | CC BY-SA 4.0 | null | 2023-06-02T22:18:37.487 | 2023-06-03T11:43:06.693 | 2023-06-03T11:43:06.693 | 27340 | 27340 | [
"lasso",
"glmnet",
"multitask-learning",
"group-lasso"
] |
617735 | 1 | null | null | 0 | 13 | [](https://i.stack.imgur.com/k3gci.png)
For the first question, I calculate the sample correlation coefficient, r = 0.3987866, and then the test statistic t = 1.229972759. Since |t| < 2.306 (the critical t value), we don't reject the null hypothesis (H0) at the 5% level. Therefore, there is no significant dependence between the lead and iron levels.
For question (b), with a sample size of n = 10, we need |t| > 2.306 in order to reject the null hypothesis at the 5% level. In the end, I found |r| > 0.6319, so the hypothetical values of the sample correlation lie in [-1, -0.6319] ∪ [0.6319, 1].
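For reference, a quick R check of the arithmetic above (assuming $n = 10$ pairs, so 8 degrees of freedom):
```
r <- 0.3987866
n <- 10
t_stat <- r * sqrt(n - 2) / sqrt(1 - r^2)    # about 1.23, as above
t_crit <- qt(0.975, df = n - 2)              # about 2.306
r_crit <- t_crit / sqrt(t_crit^2 + (n - 2))  # about 0.632, the boundary in part (b)
```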
Are my answers correct? Also, what is your intuition for this test?
| When to reject the null of no significant dependence between two variables? | CC BY-SA 4.0 | null | 2023-06-02T22:56:27.957 | 2023-06-02T23:05:03.193 | 2023-06-02T23:05:03.193 | 362671 | 389460 | [
"hypothesis-testing",
"self-study"
] |
617736 | 1 | 617748 | null | 0 | 17 | I have two one-dimensional samples that I'm trying to quantifiably distinguish (or deny such distinction). I.e. the null-hypothesis is that they come from the same population (distribution?). The alternative is that they don't. So after some reading I figured that K-S test is what I need. In order to implement it, I'm following the general instructions [here](https://www.itl.nist.gov/div898/software/dataplot/refman1/auxillar/ks2samp.htm) (as well as [wiki](https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test#Two-sample_Kolmogorov%E2%80%93Smirnov_test)).
I compute the distribution functions as the first link shows (number of members of each sample with smaller value than the one that we currently have on X axis). The result I get can be seen in the picture: [](https://i.stack.imgur.com/50kfg.png)
Then it gets a bit confusing: do I calculate the test statistic as simply the maximum "vertical" distance between these distribution functions?
I.e. I build the list of all the Y differences (for every X) as my test statistic, and in the end choose the maximum of these as the end result (to be evaluated against a test)?
In my case such difference would look like the second picture: [](https://i.stack.imgur.com/CsZ37.png)
So I'm having 9 as the resulting value.
The reason I'm confused is that if I'm to believe wiki, 9 (my result) is much higher than the test stat. I have n = m = 137 (both samples have 137 elements, even though they represent independent events), so the square root turns into measly 0.12, hence even at crazy 0.1% significance level my stats refute the null with flying colors. In fact, I could have boosted my significance level all the way down to Exp(-11097) - yeah, that's minus eleven thousandth power of e...
This is more than suspicious. Hence I want to make sure that I'm doing everything correctly. Or maybe I am correct, but the test itself is unfit for this situation, as it seems far too prone to Type I errors here.
Then maybe any advice for good alternatives?
| Is two-sample Kolmogorov test working correctly? | CC BY-SA 4.0 | null | 2023-06-02T23:04:29.993 | 2023-06-03T06:39:44.023 | 2023-06-02T23:06:59.843 | 327837 | 327837 | [
"hypothesis-testing",
"statistical-significance",
"kolmogorov-smirnov-test",
"two-sample"
] |
617737 | 2 | null | 617637 | 0 | null | There are two key points:
- The total noise grows in the sum, but it shrinks in the average.
- Intuitively, noise is measured by standard deviation, not the variance, so the noise grows more slowly than the sum does.
---
Let's flip a fair coin $n$ times and look at the sum (total number of heads):
|$n$ |95% confidence |standard deviation |variance |
|---|--------------|------------------|--------|
|$100$ |$50 \pm 10$ |$5$ |$25$ |
|$1000$ |$500 \pm 31$ |$16$ |$250$ |
|$10000$ |$5000 \pm 98$ |$50$ |$2500$ |
|$100000$ |$50000 \pm 310$ |$158$ |$25000$ |
|$1000000$ |$500000 \pm 980$ |$500$ |$250000$ |
Notice the variance is growing fast, but the noise isn't (however, it is growing).
---
Now let's look at the average:
|$n$ |95% confidence |standard deviation |variance |
|---|--------------|------------------|--------|
|$100$ |$0.5 \pm 0.1$ |$0.05$ |$0.0025$ |
|$1000$ |$0.5 \pm 0.031$ |$0.016$ |$0.00025$ |
|$10000$ |$0.5 \pm 0.0098$ |$0.005$ |$0.000025$ |
|$100000$ |$0.5 \pm 0.0031$ |$0.0016$ |$0.0000025$ |
|$1000000$ |$0.5 \pm 0.00098$ |$0.0005$ |$0.00000025$ |
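If it helps, a quick simulation sketch (in R) reproduces the pattern in both tables:
```
set.seed(1)
for (n in c(100, 10000, 1000000)) {
  heads <- rbinom(2000, size = n, prob = 0.5)   # 2000 simulated totals of heads
  cat(sprintf("n = %7d   sd of sum = %8.1f   sd of average = %.5f\n",
              n, sd(heads), sd(heads / n)))
}
```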
| null | CC BY-SA 4.0 | null | 2023-06-02T23:36:52.253 | 2023-06-02T23:36:52.253 | null | null | 70612 | null |
617738 | 2 | null | 617624 | 0 | null | This turned out just to be an oversight in my code. When fitting the adaptive lasso model to my training data set in the prediction interval function, I forgot to pass the tuning parameters for the coefficient penalties and the lambda value to `glmnet()`. As soon as I included those tuning parameters, I got the same results as OLS. Here is the corrected code:
```
##Loading Necessary Packages##
if(!require(glmnet)){
install.packages("glmnet")
}
## Bootstrapped Prediction Interval Function ##
pred_interval<-function(x0, x_train, y_train, lambda, w, alpha=1, alpha_thresh = 0.05){
require(glmnet)
n <- nrow(x_train)
b <- 50
val_resids<-NULL
bootstrap_preds<-NULL
for (i in 1:b){
N<-c(1:n)
train_ids <- sample(N, n, replace=TRUE)
val_ids <- N[-train_ids]
alasso <- glmnet(x_train[train_ids,], y_train[train_ids], alpha=alpha, lambda=lambda, penalty.factor=w)
preds <- predict(alasso, x_train[val_ids,])
val_resids[[i]] <- y_train[val_ids] - preds
bootstrap_preds[[i]] <- predict(alasso, x0)
}
bootstrap_preds <- do.call(rbind, bootstrap_preds)
bootstrap_preds1 <- bootstrap_preds - mean(bootstrap_preds)
val_resids <-do.call(rbind, val_resids)
alasso<-glmnet(x_train, y_train, alpha=alpha, lambda=lambda, penalty.factor=w)
preds<-predict(alasso, x_train)
train_resids<-y_train - preds
val_resids<-quantile(val_resids, seq(0,1,0.01))
train_resids<-quantile(train_resids, seq(0,1,0.01))
no_info_error <- mean(abs(sample(y_train, n, replace=FALSE)/sample(preds, n, replace=FALSE)))
generalization <- abs(mean(val_resids) - mean(train_resids))
no_info_val <- abs(no_info_error - train_resids)
rel_overfit<-mean(generalization/no_info_val)
weight <- .632 /(1-.368 * rel_overfit)
residual <- (1 - weight)*train_resids + weight*val_resids
k <- 1
C <- NULL
for (m in 1:length(bootstrap_preds1)){
for(o in 1:length(residual)){
C[k]<-bootstrap_preds1[m]+residual[o]
k<-k+1
}
}
qs <- c(alpha_thresh / 2, (1 - alpha_thresh / 2))
pred_qs<-quantile(C, qs)+mean(bootstrap_preds)
return(cbind(mean(bootstrap_preds),pred_qs))
}
##Fake Training Data##
set.seed(63)#For Reproducibility
x1<-runif(100, 30, 50)
x2<-runif(100, 23, 150)
x3<-runif(100, 400, 1500)
x4<-runif(100, 56, 123)
x5<-runif(100, 3, 12)
e<-rnorm(100, 10, 5) #
Y<-15.3+2.1*x1 + 6.3*x2 + 1.5*x4 +e
X<-data.frame(x1,x2,x3,x4,x5)
dtf<-data.frame(cbind(Y,X))
##Running Adaptive Lasso to get tuning parameter values##
mod.full<-lm(Y~., data=dtf)
w<-1/abs(matrix(mod.full$coefficients[-c(1)]))
alss<-cv.glmnet(x=as.matrix(X), y=Y, alpha=1, family="gaussian", penalty.factor=w)
##Fake Prediction Data##
set.seed(12)#To get a reproducible new dataset
x1<-runif(100, 30, 50)
x2<-runif(100, 23, 150)
x3<-runif(100, 400, 1500)
x4<-runif(100, 56, 123)
x5<-runif(100, 3, 12)
x0<-data.frame(x1,x2,x3,x4,x5)
##Calculating Bootstrapped Prediction interval using Adaptive Lasso##
out<-apply(x0, 1, pred_interval, x_train=as.matrix(X), y_train=Y, w=w, lambda=min(alss$lambda))
useit<-t(out)
testit<-15.3+2.1*x0[,1] + 6.3*x0[,2] + 1.5*x0[,4] +e
##Calculating prediction interval from OLS##
detm<-predict(mod.full, x0, level=0.95,interval="prediction")
##Plotting Results##
plot(useit[,2], testit)
points(useit[,3], testit, col="red") #Bootstrapped Lower Bound
points(useit[,4], testit, col="red") #Bootstrapped Upper Bound
points(detm[,2], testit, col="blue") #OLS Lower Bound
points(detm[,3], testit, col="blue") #OLS Upper Bound
```
| null | CC BY-SA 4.0 | null | 2023-06-03T00:23:45.257 | 2023-06-03T00:23:45.257 | null | null | 354118 | null |
617739 | 1 | null | null | 0 | 22 | I am very excited to be on this forum. I am new to biostatistics and have a question regarding my case-control study, for which I am using STATA for data analysis.
All my patients are diabetic. Some are on Metformin, and some are not. Likewise, some have respiratory infections, and some don't. My cases are diabetic patients with respiratory infection, and controls are the diabetic patients without respiratory infection. The exposure variable is Metformin.
I need to clarify if I am right or wrong about the following:
For this type of study, if I do a case-control odds ratio, it will be a crude or unadjusted odds ratio because STATA doesn't know if all these patients have diabetes. Correct? (In STATA, the case variable is Respiratory infection, and the Exposed variable is Metformin.)
In order to tell STATA I am doing it for patients who have diabetes, I would have to do a logistic regression with respiratory infection as the dependent variable and Metformin and diabetes as independent variables. It will also give me an adjusted odds ratio. Correct?
| Case-control study, odds ratio, logistic regression in Stata | CC BY-SA 4.0 | null | 2023-06-03T00:53:54.917 | 2023-06-03T08:38:32.240 | 2023-06-03T08:38:32.240 | 53690 | 389464 | [
"logistic",
"stata",
"odds-ratio",
"case-control-study"
] |
617740 | 2 | null | 617682 | 0 | null | Leaving one out is a creative solution, but unfortunately, it cannot help you to guess the form of the distribution of $\bar{X}$, because you will generate values that are equal to $(n\bar{X}-X_i)/(n-1)$, essentially a constant, $n\cdot \bar{X}$, minus the original sample, the whole thing scaled by $1/(n-1)$. So it will still look trimodal.
The idea to check artificially generated outputs of $\bar{X}$ is however good!
You can do something very closely related: simple bootstrapping, i.e. resampling your sample with replacement.
With such large sample sizes as you have, you should in most cases be on the safe side, unless you can spot that the data produce extreme outliers (looking at a boxplot can give an idea).
If you have R, you can experiment with the following R code to get an impression:
```
# function to simulate from a nasty trimodal distribution
rtrimod <- function(n) rlnorm(n) + 10 * sample(0:2, size = n, replace=TRUE)
# sample size: 200
n <- 200
x <- rtrimod(n)
hist(x, breaks = 50)
# first we simulate the real sampling distribution
nsimul <- 10000
xbar <- numeric(nsimul) # reserve space
for (i in 1:nsimul)
xbar[i] <- mean(rtrimod(n))
hist(xbar, breaks = 50)
qqnorm(xbar, main = "qqnorm of sample means")
# techniques to try out
# leave one out:
xminus <- numeric(n)
for (i in 1:n)
xminus[i] <- mean(x[-i])
hist(xminus, breaks = 50)
qqnorm(xminus)
# now bootstrap
nboot <- 10000
xboot <- numeric(nboot)
for (i in 1:nboot)
xboot[i] <- mean(sample(x, replace = TRUE))
hist(xboot, breaks = 50)
qqnorm(xboot, main = "qqnorm of bootstrap")
```
p.s. Pythonists: in R, `qqnorm` produces a Q-Q plot against a standard normal. This does not matter, since the `qqnorm` plot of any other large enough normal sample will look nicely linear, just not like the identity line, but a line with intercept $\mu$ and slope $\sigma$.
p.p.s: [Automatically](https://www.codeconvert.ai/r-to-python-converter) translated code for python (not checked):
```
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt   # needed for the plt calls below

def rtrimod(n):
    return np.random.lognormal(size=n) + 10 * np.random.choice([0, 1, 2], size=n)

n = 200
x = rtrimod(n)
_ = plt.hist(x, bins=50)

# first we simulate the real sampling distribution
nsimul = 10000
xbar = np.zeros(nsimul)
for i in range(nsimul):
    xbar[i] = np.mean(rtrimod(n))
_ = plt.hist(xbar, bins=50)
_ = stats.probplot(xbar, plot=plt)

# leave one out
xminus = np.zeros(n)
for i in range(n):
    xminus[i] = np.mean(np.delete(x, i))
_ = plt.hist(xminus, bins=50)
_ = stats.probplot(xminus, plot=plt)

# now bootstrap
nboot = 10000
xboot = np.zeros(nboot)
for i in range(nboot):
    xboot[i] = np.mean(np.random.choice(x, size=n, replace=True))
_ = plt.hist(xboot, bins=50)
_ = stats.probplot(xboot, plot=plt)
```
| null | CC BY-SA 4.0 | null | 2023-06-03T01:06:57.730 | 2023-06-03T11:36:30.080 | 2023-06-03T11:36:30.080 | 237561 | 237561 | null |
617741 | 2 | null | 349138 | 1 | null | This answer applies to finetuning pretrained transformer models in NLP, but not computer vision.
Contrary to Ng's advice, and somewhat contrary to the currently accepted answer, it's standard practice to fine-tune the entire transformer, more-or-less regardless of the amount of training data. See the [standard text classification tutorial](https://huggingface.co/docs/transformers/tasks/sequence_classification), for example. A more compelling example is that [SetFit](https://github.com/huggingface/setfit)1 achieves excellent accuracy on many few-shot text classification benchmarks after finetuning all 100M+ parameters of a transformer model using as few as 50 observations.
Some notes before presenting experiments:
- None of the training algorithms mentioned in this answer rely on layer-wise learning rates, in case you were concerned about that. As usual, the learning rate + scheduler is just another hyperparameter you tune based on folklore and experiments.
- In all of the experiments, unfreezing a transformer's attention block is phrased as unfreezing a "layer". An attention block technically contains two big layers (in the strictest sense of the word) and many many weight matrices.
Here are 2 mini empirical analyses which contain plots where the x-axis is the # of frozen encoder or decoder layers and the y-axis is accuracy:
- The first GPT paper2: see the left plot of Figure 2
The paper doesn't vary training sizes for that plot, so it's hard to say how affordable different amounts of unfreezing are for a smaller training set.
[](https://i.stack.imgur.com/fuYXu.png)
- This blog post for BERT
Interestingly, there doesn't appear to be a strong interaction effect of # unfrozen layers and training set size on accuracy; you can unfreeze somewhat liberally.
Unfortunately, the blog post doesn't contain training scores, so it doesn't provide evidence that more unfreezing causes greater complexity. The GPT paper does provide this evidence. And in my experience training transformers for classification and similarity tasks, this has been the case.
The plots are slightly dubious to me because it looks like freezing all 12 BERT encoder blocks (except presumably the tacked-on linear layer) gets majority accuracy, i.e., nothing is really learned. Typically, freezing all of the encoder blocks does not perform this terribly. More on that later.
(From the blog post) SST-2 benchmark:
[](https://i.stack.imgur.com/FPbEw.png)
(From the blog post) CoLA benchmark:
[](https://i.stack.imgur.com/GQLnT.png)
Going even further, there's [evidence](https://arxiv.org/abs/2006.05987)3 that re-initializing some of BERT's attention blocks before training improves performance, even with just a few thousand training observations:
[](https://i.stack.imgur.com/mExFi.png)
In other words, intentionally forgetting some of what was learned during pretraining can improve performance on the target task. So don't be too concerned about seemingly immodest increases in variance / decreases in bias, as the accepted answer may lead you to believe. These quantities are not intuitive for modern NNs. You have to run experiments.
(That paper is probably the most thorough analysis of BERT fine-tuning that I've seen. You may find other experiments in there to be insightful.)
It's also important to not just count layers when thinking about complexity; pay attention (pun intended) to what the layers are doing. When classifying text using transformers, a linear layer is tacked on to a pooled or specifically chosen output from the pretrained model, which consists of many attention blocks which do the heavy lifting. Freezing all but the linear layer may do fine. But freezing all but the linear layer and the last attention block may end up doing significantly better, as the step in model complexity is significant. Empirically, freezing subsequent attention blocks can yield diminishing returns.
Finally addressing your question:
>
Is unfreezing more layers always better
Yes for modern NLP transformer models. There aren't many caveats to that answer, which is indeed surprising.
But keep in mind that you can save a great deal of training time and memory at little-to-no statistical cost by unfreezing fewer layers. Here's a passage from the [original BERT paper](https://arxiv.org/abs/1810.04805)4 re an experiment where they don't finetune BERT at all. They instead use it as a feature extractor for a named entity recognition task:
>
. . . we apply the feature-based approach by extracting the activations from one or more layers without fine-tuning any parameters of BERT. These contextual embeddings are used as input to a randomly initialized two-layer 768-dimensional BiLSTM before the classification layer. ¶
Results are presented in Table 7. BERTLARGE performs competitively with state-of-the-art methods. The best performing method concatenates the token representations from the top four hidden layers of the pre-trained Transformer, which is only 0.3 F1 behind fine-tuning the entire model. This demonstrates that BERT is effective for both finetuning and feature-based approaches.
Based on my own classification experiments, you don't even need to train a BiLSTM on BERT features to compete with finetuning BERT. Fitting $l_2$ logistic regression on mean-pooled token embeddings (or the `[CLS]` token embedding for BERT, or the [last token embedding](https://stats.stackexchange.com/q/574483/337906) for autoregressive models) from the last attention block is a statistically stable and CPU-friendly baseline. Feature extraction approaches are also great for ML applications where you need to run a suite of classifiers for each input, as you can share the output of a single model's forward pass. Because of these benefits, I wouldn't be too keen on unfreezing layers for simpler tasks.
## References
- Tunstall, Lewis, et al. "Efficient Few-Shot Learning Without Prompts." arXiv preprint arXiv:2209.11055 (2022).
- Radford, Alec, et al. "Improving language understanding by generative pre-training." (2018).
- Zhang, Tianyi, et al. "Revisiting few-sample BERT fine-tuning." arXiv preprint arXiv:2006.05987 (2020).
- Devlin, Jacob, et al. "Bert: Pre-training of deep bidirectional transformers for language understanding." arXiv preprint arXiv:1810.04805 (2018).
| null | CC BY-SA 4.0 | null | 2023-06-03T01:19:10.730 | 2023-06-03T14:18:39.963 | 2023-06-03T14:18:39.963 | 337906 | 337906 | null |
617742 | 2 | null | 616069 | 1 | null | >
Is there any way to estimate the optimal number of models you should try, before this model selection process becomes counterproductive?
A rather wonky and conservative guess can be made based on my answer [here](https://stats.stackexchange.com/a/570680/337906). Define:
- $n$ is the number of independent observations which you evaluate on, e.g., the size of the validation set
- $\epsilon$ is the maximum acceptable difference between the best model's estimated error rate and its true error rate
- $p'$ is the maximum acceptable probability that your best model's error rate estimator misses the true error rate by more than $\epsilon$
- $m$ is the pre-specified number of models to evaluate on the validation set
then, a conservative upper bound on $m$ is:
$$
\begin{equation*}
\text{floor} \bigg( { \frac{1}{2} p' \exp(2 \epsilon^2 n) } \bigg).
\end{equation*}
$$
I'm not saying that this formula is useful in actual ML workflows. But it reveals a few things about how the maximum number of models you can evaluate scales wrt a few important quantities, which is useful info.
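As a toy illustration (with arbitrarily chosen $\epsilon = 0.05$ and $p' = 0.05$; R):
```
m_max <- function(n, eps = 0.05, p = 0.05) floor(0.5 * p * exp(2 * eps^2 * n))
sapply(c(500, 1000, 2000), m_max)   # roughly 0, 3, and 550 models
```
Note how, for fixed $\epsilon$, the bound grows exponentially in the number of validation observations.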
| null | CC BY-SA 4.0 | null | 2023-06-03T01:54:12.240 | 2023-06-03T01:54:12.240 | null | null | 337906 | null |
617743 | 1 | null | null | 0 | 7 | I am using the Chow test to analyze structural change in time series data potentially caused by a significant event. Since each data point represents a day, the data is broken into two chunks:
Chunk 1: All data occurring before the event
Chunk 2: All data occurring on and after the event.
This is working fine, but I also want to analyze potential changes in the 30-day period following the significant event. In other words, I want to use a cutoff of 30 days rather than a single day, if that makes sense.
So far, I have tried to create a larger period cutoff by setting the beginning of the second chunk to 30 days after the event. This means the data is broken into two chunks:
Chunk 1: All data occurring before the event
Chunk 2: All data occurring 30 days later and beyond
To my understanding, this excludes data occurring in days 1-29. Assuming all of this makes sense, is this a valid way to apply the Chow test using a period cutoff rather than a single-day cutoff? If not, is there a better way to use a period cutoff? I came across this post, but it seems to deal with automatically detecting breakpoints rather than using a priori breakpoints.
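For concreteness, here is roughly what I mean, sketched in R (the column names, `event_date`, and the model formula are placeholders; the pooled-vs-interaction F test is one standard way to compute a Chow-type statistic):
```
dat$period <- ifelse(dat$date < event_date, "pre",
                     ifelse(dat$date >= event_date + 30, "post", NA))
dat2 <- subset(dat, !is.na(period))          # days 1-29 after the event are dropped
pooled <- lm(y ~ x, data = dat2)             # same coefficients in both chunks
split  <- lm(y ~ x * period, data = dat2)    # coefficients allowed to differ
anova(pooled, split)                         # F test for a structural change
```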
| Period cutoff in Chow test | CC BY-SA 4.0 | null | 2023-06-03T02:15:22.880 | 2023-06-03T02:15:22.880 | null | null | 389466 | [
"time-series",
"hypothesis-testing",
"exploratory-data-analysis",
"change-point",
"chow-test"
] |
617744 | 1 | null | null | 0 | 5 | I am watching a tutorial on using mel spectrograms to classify the audio's genre via CNN. My question is why apply local min-max normalization to each individual mel spectrogram? What I mean by local is that the min and max value is calculated from the individual mel spectrogram and then min-max normalization is applied; thus, you have to get min and max for each mel spectrogram and then apply the min-max normalization based on its own min and max.
Why apply this local min-max normalization rather than computing the min and max over the whole training set first and then applying the normalization? Also, why not use standardization (Z-score normalization)?
| Per Mel Spectrogram min-max normalization vs full training set min-max normalization for CNN classification of audio | CC BY-SA 4.0 | null | 2023-06-03T02:43:57.473 | 2023-06-03T02:43:57.473 | null | null | 389468 | [
"neural-networks",
"classification",
"normalization",
"standardization",
"audio"
] |
617746 | 1 | 617747 | null | 0 | 12 | This might be a stupid question, so bear with me.
I was wondering if embeddings can be used to anonymize input text.
I couldn't find any information online that says that embeddings can be 1:1 decoded back to the original text.
An example:
A user wants to check some metadata about a query with an external API, but doesn't want the exact text input to be sent to the API (it might contain sensitive information).
Can he send the embeddings to the API, assuming the API can do meaningful checks on the embeddings - so the API will not have access to the original text?
| Using embeddings to anonymize information | CC BY-SA 4.0 | null | 2023-06-03T03:59:56.933 | 2023-06-03T06:05:41.070 | null | null | 389471 | [
"word-embeddings",
"llm"
] |
617747 | 2 | null | 617746 | 0 | null | Embeddings encode the data but do nothing to encrypt it and give no guarantees of security. If you want to encrypt the data but be able to do mathematical operations you can use something like [homomorphic encryption](https://en.wikipedia.org/wiki/Homomorphic_encryption). With embeddings used for this purpose, the attacker could backward engineer them and deduce the encoded text.
If you want to know more on why this is a bad idea and how homomorphic encryption helps, [https://crypto.stackexchange.com/](https://crypto.stackexchange.com/) would be a better place to ask.
| null | CC BY-SA 4.0 | null | 2023-06-03T06:05:41.070 | 2023-06-03T06:05:41.070 | null | null | 35989 | null |
617748 | 2 | null | 617736 | 2 | null | The K-S test is defined in terms of the maximum difference between cumulative distribution functions. The CDF is defined as $F_X(t)= P(X\leq t)$, so its range is from 0 to 1, not 0 to the sample size, as you have. Your link has it defined correctly, as a fraction with the sample size in the denominator.
You need to divide your distribution functions by their respective sample sizes to get them into the interval $[0,1]$.
Your difference should be 9/137, not 9.
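For reference, a minimal R sketch of the corrected computation (with `x` and `y` standing in for your two samples):
```
Fx <- ecdf(x); Fy <- ecdf(y)
grid <- sort(c(x, y))
D <- max(abs(Fx(grid) - Fy(grid)))   # e.g. 9/137 rather than 9
ks.test(x, y)                        # computes D and a p-value directly
```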
| null | CC BY-SA 4.0 | null | 2023-06-03T06:39:44.023 | 2023-06-03T06:39:44.023 | null | null | 249135 | null |
617749 | 1 | null | null | 1 | 7 | Definition: I have conducted research on EEG signal classification, specifically focusing on distinguishing between two different classes using raw EEG signals. Data availability poses a significant challenge in the EEG domain, which necessitates the implementation of data augmentation techniques. In my case, I have applied additive Gaussian noise with zero mean and varying standard deviations (σ∈{0.1,0.01,0.001}) to the raw EEG signals for data augmentation. Additionally, I have considered the magnification factor (m∈{1,2,3}) for the additive noise. By augmenting my training data using different combinations of m and σ, I have observed an improvement in test set accuracy in most cases.
Question: Considering the training data as X_train, the augmented data as X_train_aug, and the test data as X_test, I would like to determine if there exists a mathematical relationship between (X_train, X_test) and (X_train_aug, X_test) that can explain the observed improvement. Are there any criteria available for measuring the relationship between these variables that can help elucidate the results?
Thanks in advance.
| Investigating the Impact of Additive Gaussian Noise on EEG Signal Classification: Analyzing the Relationship between Augmented and Original Data | CC BY-SA 4.0 | null | 2023-06-03T06:49:45.543 | 2023-06-03T06:49:45.543 | null | null | 389477 | [
"time-series",
"classification",
"white-noise",
"data-augmentation"
] |
617751 | 1 | null | null | 0 | 5 | If $\int|t\varphi(t)|\mathrm{d}t < \infty$, where $\varphi(t)$ is the characteristic function of a random variable $X$, does $X$ have continuous density?
My current thinking is: not necessarily. Multiplying by $t$ might remove a singularity from $\varphi(t)$. If that were so, then there may be an example of $\int|\varphi(t)|\mathrm{d}t = \infty$ and so $X$ wouldn't necessarily have continuous density.
But I can't think of a characteristic function that a) has a singularity at zero and b) when the absolute value of the product of it and $t$ is the integrand, the integral converges.
Is the question's answer actually 'yes'?
| If the integral of the product of t and phi(t) converges, does the random variable have continuous density? | CC BY-SA 4.0 | null | 2023-06-03T08:41:28.797 | 2023-06-03T08:41:28.797 | null | null | 364080 | [
"probability",
"random-variable",
"characteristic-function"
] |
617752 | 1 | null | null | 2 | 86 | I did an experiment where I have ten different treatments, with six replicates each. I noted the survival of my animals from Day 4 to Day 12. Each replicate had 12 animals so I noted how many survived each day. I don't have individual data for each animal. So my data looks something like this:
```
Treatment Replicate Survival_D4 ... Survival_D12
Control 1 12 12
Control 2 11 11
Control 3 12 4
Control 4 9 6
Control 5 10 8
Control 6 11 12
Treatment 1 1 12 12
...
Treatment 9 6 12 8
```
My supervisor recommended a Kaplan-Meier survival analysis; however, I don't have the 0/1 status data for individual animals.
I calculated the cumulative survival probability for each replicate and was able to make a plot.
I did this by calculating the survival probability each day by dividing the number of survivors by 12 (since I started with 12 animals). Then for the cumulative survival probability, I multiplied the survival probability with the cumulative survival probability of the day before.
But I want to know whether there are significant differences between my treatments. I want to see whether, for example, a certain treatment leads to faster death.
So my question is, can I still use Kaplan Meier and how would I do so?
If I just use my own cumulative survival probability, how would I check for a significant difference?
Also, I averaged the cumulative survival probabilities of my replicates so that I only have one value per day for each treatment. Is this okay to do?
| Kaplan Meier but no individuals' data | CC BY-SA 4.0 | null | 2023-06-03T08:59:22.603 | 2023-06-03T15:07:28.223 | 2023-06-03T11:51:22.653 | 369002 | 389437 | [
"survival",
"kaplan-meier"
] |
617753 | 1 | null | null | 1 | 14 | I would like to do some feature selection before running the hyperparameter tuning.
I know it could be done via nested CV but that would become too cumbersome from a computational time perspective.
Would it be ok to:
- perform a train test split
- run bootstrap (randomly selecting X observations) over the whole training dataset several times and train each time a Random forest with default hyperparameters to obtain feature importances.
- Only consider features that were on average in the top 10 positions as predictors, and proceed with the cross validation for hyperparameter tuning done by creating folds within the training dataset.
- pick the hyperparameter combination that yields the best average performance over the validation sets (created within the training set)
- retrain the model on the whole training set using the selected features and hyperparameters and obtain the final test accuracy on the hold out test set.
Does the bootstrap reduce the bias I would introduce if I ran only one random forest on the whole training dataset before doing CV? Or is it still significantly overfitting to the training data?
| Feature selection via bootstrap before CV | CC BY-SA 4.0 | null | 2023-06-03T09:57:08.477 | 2023-06-03T10:00:49.190 | 2023-06-03T10:00:49.190 | 389355 | 389355 | [
"machine-learning",
"cross-validation",
"feature-selection",
"bootstrap",
"importance"
] |
617754 | 1 | null | null | 1 | 13 | I want to calculate the standard error (SE) of regression slope.
First off, wikipedia says SE is the square root of slope estimator variance or in other words:
$$SE(\hat{\beta}) = \sqrt{Var(\hat{\beta})} = \sqrt{\frac{\sum_{i=1}^n (y_i - \hat{\beta} x_i)^2}{(n-1) \sum_{i=1}^n x_i^2}}$$
but one of my professors says SE is the normalized variance of regression slope estimator or $\sigma\big/\sqrt{n}$
which one is true?
and also, in deriving variance of $\hat{\beta}$, we have $\frac{\sigma^2}{\sum x^2}$ where $\sigma^2$ is variance of errors. ($y = \beta x + \epsilon$)
How do I derive the equation for the variance of the errors? I cannot determine the degrees of freedom here.
is it true to say:
$$Var(\epsilon) = \frac{\sum_{i=1}^n (y_i - \hat{\beta} x_i)^2}{n-1}$$
as $\epsilon = y -\beta x$ and $\mathbb{E}[\epsilon] = 0$
and as my final question, I wanted to write $\hat{y_i}$ in terms of a linear combination of $y_i$ but have no idea. I would be thankful if you give me some idea to start.
| standard error of regression slope estimator | CC BY-SA 4.0 | null | 2023-06-03T09:57:52.427 | 2023-06-03T09:57:52.427 | null | null | 389492 | [
"regression",
"variance",
"standard-error"
] |
617755 | 1 | null | null | 0 | 6 | I'm currently learning Convolutional Neural Networks and am stuck on trying to figure out how to compute gradients in a layer that uses transposed convolution. Also, how do I calculate the gradients if I use padding=1 and stride=2?
Thanks to this article "https://hideyukiinada.github.io/cnn_backprop_strides2.html" I was able to figure out how to calculate gradients in normal convolution and all that remains for me is to figure out how to calculate them in transposed convolution.
| How to backpropagate transposed convolution? | CC BY-SA 4.0 | null | 2023-06-03T10:03:01.660 | 2023-06-03T10:03:01.660 | null | null | 389494 | [
"machine-learning",
"backpropagation",
"convolution",
"transposed-convolution"
] |
617756 | 1 | null | null | 0 | 14 | Let's say I am collecting a sample of averages from a population, but these averages are collected at specific time intervals. I collect sample S1 at time T1, then X amount of time passes and I collect sample S2 at time T2, where T2 = T1 + X, and so on. Then I perform a regression on samples S1,...,Sn over the times T1,...,Tn and get a prediction for Sn+1 at time Tn+1.
Question: If I include this predicted sample Sn+1 in the average calculation Savg = AVG(S1, S2, ..., Sn, Sn+1), does the Central Limit Theorem still hold, i.e. would these samples (including the predicted one) be approximately normally distributed (if n is rather small, say n = 10), given that Sn+1 is not itself an average but is still a statistic computed from the group of samples of the population?
| Central Limit Rigging | CC BY-SA 4.0 | null | 2023-06-03T10:08:50.627 | 2023-06-03T10:22:15.960 | 2023-06-03T10:22:15.960 | 389495 | 389495 | [
"regression",
"central-limit-theorem"
] |
617757 | 1 | null | null | 0 | 5 | I am trying to recreate Figure 6 from Heffernan-Tawn (2004) [A conditional approach for multivariate extreme values](https://www.d.umn.edu/%7Eyqi/mydownload/heffemantawn.pdf) using the same datasets as used in the paper. The original time series data of air pollutants is available from [here](https://vincentarelbundock.github.io/Rdatasets/). Taking just winter samples of NO and NO$_2$ (Fig. 6 (a)) as examples and referring to them with subscripts 1 and 2, respectively, and simplifying some of the notation, here is the general procedure:
As far as I understand, the paper uses the following steps to create Figure 6:
- Independently transform the original samples $X_1,X_2$ to Gumbel margins $t_1(X_1), t_2(X_2)=Y_1,Y_2$ by fitting a generalised Pareto distribution to samples exceeding the 70% quantile of each vector.
- Fit the conditional dependence model of $Y_2 | Y_1 = y_1 \geq u_{Y_1}$ for some chosen threshold $u_{Y_1}$. The dependence structure is (generally) given by $$Y_2 = ay_1 + y_1^bZ,$$ $Z\sim \mathcal{N}(\mu_0, \sigma_0)$, where $(a,b,\mu_0,\sigma_0)$. Then $Y_2$ has a conditional distribution of $ \mathcal{N} (\mu(y_1), \sigma(y_1))$. Where $$ \mu(y_1) = ay_1 + y_1^b\mu_0 \\ \sigma(y_1) = y_1^{b}\sigma_0.$$ This is optimised for $(a,b, \mu_0, \sigma_0)$ by maximising the Gaussian log-likelihood function
$$Q= \sum_{k \in S} \log(\sigma(y_1^{(k)}) + \frac{1}{2}\left[ \frac{y_2^{(k)} - \mu(y_1^{(k)})}{\sigma(y_1^{(k)})}\right]^2$$
where $S$ is the set of indices for which $y_1 \geq u_{Y_1}.$
- To generate new extreme samples: sample $N$ new extreme $Y_1^*$ from a standard Gumbel distribution, exceeding a chosen threshold $v_{Y_1}$. Then sample $N$ samples from $Z \sim \mathcal{N}(\mu_0, \sigma_0)$ and use the fitted $(a,b)$ to generate samples of $Y_2$ as in step 2.
- Transform $Y_1, Y_2$ back to $X_1,X_2$ using $t_i^{-1}(Y_i)$ and plot the scatterplot. I am defining the inverse transform for $Y > u_{Y} = -\log(-\log(\tilde{F}(u_{X})))$ as
$$
X_1 = u_{X_1} + \frac{\beta}{\xi}
\left[
\left(
\frac{1 - \text{e}^{-\text{e}^{-Y_1}}}{1-\tilde{F}(u_{X_1})}
\right)^{-\xi} - 1
\right]
$$
and otherwise just the quantile function of the empirical CDF. I think I should also be able to use the quantile function of the fitted generalised Pareto distribution for $Y > u_{Y}$, but the results the R function POT::qgpd gave me looked quite strange.
Using R, this is my procedure.
- Transform the $X_1,X_2$ to Gumbel margins according to Equations $(1.3)$ and $(1.4)$ in the paper using the following code. A threshold $\tilde F(u_x) =0.7$ is used in the paper. I obtain $\beta$ and $\xi$ from the function POT::fitgpd(x, threshold=quantile(x, 0.7)).
```
gumbel_transform <- function(x, quant, xi, beta){
F.x <- ecdf(x)
u.x <- quantile(F.x, quant)
F.hat.x <- sapply(x, F.marginal, u.x=u.x, F.x=F.x, xi=xi, beta=beta)
return(list("Y"=-log(-log(F.hat.x)), "F"=F.x))
}
F.marginal <- function(x, u.x, F.x, xi, beta){
if(x <= u.x){
return(F.x(x))
}
else{
pareto.tail <- max(1 + xi * (x - u.x) / beta, 0)^(-1 / xi)
return(1 - (1 - F.x(u.x)) * pareto.tail)
}
}
NO.fit <- POT::fitgpd(winter$NO, quantile(winter$NO, .7))
NO.transform <- gumbel_transform(winter$NO, .7, xi = NO.fit$fitted.values["shape"], beta = NO.fit$fitted.values["scale"]) # assuming fitgpd returns named scale/shape estimates
```
- Fit $Y_2$ conditional on $Y_1 > t_1(u_{X_1})$ by finding the optimal set $(a,b,\mu_0,\sigma_0)$ and defining the objective function $Q$ and its gradient as follows
```
Q <- function (par, data, conditions)
{ # modified from library tsxtreme (changed par limits)
# (5.2) in Heffernan-Tawn (2004)
if (par[4] <= 0) {
return(Inf)
}
if (conditions) {
if (!conditions.verify(par[1], par[2], 0, data = data) ||
!conditions.verify(par[1], par[2], 1, data = data)) {
return(Inf)
}
}
else {
if (par[1] < 0 || par[1] > 1 || par[2] >= 1) {
return(Inf)
}
}
sigma <- par[4] * data[, 1]^(2 * par[2])
if(par[1]==0 & par[2] < 0){
mu = par[5] + par[6] * log(data[, 1]) + par[3] * data[, 1]^par[2]
} else{
mu <- par[1] * data[, 1] + par[3] * data[, 1]^par[2]
}
return(sum(log(sigma) + (data[, 2] - mu)^2/sigma))
}
dQ <- function (par, data, conditions)
{ # modified from tsxtreme
sig <- par[4] * data[, 1]^(2 * par[2])
mu <- par[1] * data[, 1] + par[3] * data[, 1]^par[2]
cen <- data[, 2] - mu
d.a <- -2 * sum(cen * data[, 1]/sig)
d.b <- 2 * sum(log(data[, 1]) * (1 - cen * (par[3] * data[, 1]^par[2] + cen)/sig))
d.m <- -2 * sum(cen/(data[, 1]^(par[2]) * par[4]))
d.s <- sum((1 - cen^2/sig)/par[4])
return(c(d.a, d.b, d.m, d.s))
}
```
and fit for $(a,b,\mu_0,\sigma_0)$
```
a <- runif(1, -1, 1)
b <- runif(1, 0, 1)
fit <- optim(par=c(a, b, 0, 1), fn=Q, gr=dQ, conditions=FALSE,
data=c(Y$NO, Y$NO2), hessian=TRUE, method="BFGS")
```
- Simulate extreme samples $Y_1^*$ from $Y_{\text{NO}}$, exceeding the 99% quantile by sampling uniformly from $q\in[0.99, 1)$ and applying the Gumbel quantile function $y = -\log(-\log(q))$ (see other Stack question here).
- Simulate the same number of samples from $\mathcal{N}(\mu_0, \sigma_0)$ and transform them using the fitted $(a,b)$ to obtain $Y_2^*$.
- Inverse-transform these back to $X_1^*, X_2^*$ using the inverse of (1.3) and (1.4)
```
# Inverse Gumbel transforms
# Map Gumbel-scale values back to the original scale: empirical quantile
# function below the threshold, inverse GPD tail model above it
inv.gumbel <- function(y, q, transform, fit){
  u <- quantile(transform$F, q)
  sapply(y, inv.gumbel.inner, u=u, q=q, transform=transform, fit=fit)
}
inv.gumbel.inner <- function(y, u, q, transform, fit){
  if(y <= -log(-log(q))){
    # below the threshold: empirical quantile function of the original data
    return(quantile(transform$F, exp(-exp(-y))))
  }else{
    # above the threshold: invert the fitted GPD tail model
    inner <- ((1 - transform$F(u)) / (1 - exp(-exp(-y))))^fit$shape
    outer <- (fit$scale / fit$shape) * (inner - 1) + u
    return(outer)
    #return(POT::qgpd(exp(-exp(-y)), loc=u, scale=fit$scale, shape=fit$shape))
  }
}
X.sample$NO <- inv.gumbel(Y.sample$NO, q=q.u, transform=NO.transform, fit=NO.fit)
```
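In code, the two simulation steps above look roughly like this (a sketch consistent with those steps, not code from the paper; `fit` is the `optim` result of the conditional fit and `fit$par[4]` is treated as a variance, as in `Q`):
```
# Sample Y1* above the 99% Gumbel quantile, then apply the fitted conditional model
set.seed(1)
n.sim <- 500
a    <- fit$par[1]; b    <- fit$par[2]
mu0  <- fit$par[3]; sig0 <- sqrt(fit$par[4])   # par[4] enters Q as a variance

q       <- runif(n.sim, 0.99, 1)        # uniform above the 99% level
Y1.star <- -log(-log(q))                # Gumbel quantile function
Z       <- rnorm(n.sim, mu0, sig0)      # Gaussian residuals
Y2.star <- a * Y1.star + Y1.star^b * Z  # conditional model for Y2 given Y1 = y

Y.sample <- data.frame(NO = Y1.star, NO2 = Y2.star)
```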
Here is my Fig. 6 (a). As you can see, the simulated NO2 values all lie beneath the line of equal marginal quantiles, which is not right.
[](https://i.stack.imgur.com/TWv4R.png)
Here is how it should look
[](https://i.stack.imgur.com/w6tVp.png)
| Recreating Heffernan-Tawn Fig. 6 in R: samples from multivariate extreme value distribution | CC BY-SA 4.0 | null | 2023-06-03T11:03:32.810 | 2023-06-03T11:03:32.810 | null | null | 363176 | [
"extreme-value",
"multivariate-distribution"
] |
617758 | 2 | null | 617752 | 4 | null | You do have the individuals' data. It's just not formatted in a nice way at the moment, and it will be some work to manipulate the data to extract it.
Let's say Treatment X, Replicate Y has this data:
```
Survivors D4 = 10
Survivors D5 = 10
Survivors D6 = 10
Survivors D7 = 9
Survivors D8 = 9
Survivors D9 = 9
Survivors D10 = 4
Survivors D11= 4
Survivors D12 = 4
```
You know you started with 12, and Survivors D4 = 10. Therefore, 2 died between the start and D4. So you have two records with the event in the interval (0,4).
The next event happens between D6 and D7. One more individual died. So you have one record with the event in the interval (6,7).
5 events happened between D9 & D10. So you have five records with the event in the interval (9,10).
No more events happened after this - the rest survived, so their event is in the interval (12, infinity) (4 records).
So your individual-level data set for this dummy example is
```
(0,4)
(0,4)
(6,7)
(9,10)
(9,10)
(9,10)
(9,10)
(9,10)
(12, infinity)
(12, infinity)
(12, infinity)
(12, infinity)
```
Now you have to program that logic for all your groups. This should give data that you can feed into survival-analysis software (at least, I know you can do it with the R `survival` package).
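To make that concrete, here is a minimal R sketch (the column names `day` and `survivors` and the Weibull model are just for illustration) that expands the counts for one replicate into interval-censored records and fits them with the `survival` package:
```
library(survival)

# dummy counts for one replicate, starting from 12 individuals
counts  <- data.frame(day = 4:12,
                      survivors = c(10, 10, 10, 9, 9, 9, 4, 4, 4))
n_start <- 12

expand_replicate <- function(counts, n_start) {
  prev_day  <- c(0, head(counts$day, -1))
  prev_surv <- c(n_start, head(counts$survivors, -1))
  deaths    <- prev_surv - counts$survivors          # deaths in each interval
  left      <- rep(prev_day,  deaths)                # lower bound of event interval
  right     <- rep(counts$day, deaths)               # upper bound of event interval
  n_cens    <- tail(counts$survivors, 1)             # still alive at the last check
  data.frame(left  = c(left,  rep(tail(counts$day, 1), n_cens)),
             right = c(right, rep(NA, n_cens)))      # NA upper bound = right-censored
}

d <- expand_replicate(counts, n_start)
# interval2 coding: (left, NA) is right-censored, (left, right) is interval-censored
fit <- survreg(Surv(left, right, type = "interval2") ~ 1, data = d, dist = "weibull")
summary(fit)
```
You would then stack the expanded replicates and add treatment/replicate columns as covariates.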
| null | CC BY-SA 4.0 | null | 2023-06-03T11:03:55.863 | 2023-06-03T11:03:55.863 | null | null | 369002 | null |
617759 | 1 | null | null | 0 | 2 | Assume $N$ measurements of the same parameter, e.g. a surface temperature $\vartheta_\mathrm{surf}$, were obtained using a single measuring chain (temperature sensor, cables, ADC, ...).
Furthermore, the uncertainty of each single temperature reading $\vartheta_\mathrm{surf,i}$ ($i = 1 \dots N)$ was estimated as $u(\vartheta_\mathrm{surf,i})$ using manufacturers' specifications which can be interpreted as a [GUM](https://www.bipm.org/documents/20126/2071204/JCGM_100_2008_E.pdf/cb0ef43f-baa5-11cf-3f85-4dcd86f77bd6) Type B uncertainty.
However, the obtained $\vartheta_\mathrm{surf,i}$ should be used to get an estimate of the surface temperature $\vartheta_\mathrm{surf}$. For this, the mean of all $\vartheta_\mathrm{surf,i}$ $(i = 1 \dots N)$ is calculated. Calculating the [standard deviation of the mean (aka standard error)](https://en.wikipedia.org/wiki/Standard_error) can be interpreted as a GUM Type A uncertainty.
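For concreteness, a small numeric sketch (in R) of the two quantities involved — the readings are made up, and u_B stands for the per-reading Type B standard uncertainty from the manufacturer's specification:
```
theta <- c(20.3, 20.1, 20.4, 20.2, 20.3)  # N surface-temperature readings (made up)
u_B   <- 0.15                             # Type B standard uncertainty per reading

N         <- length(theta)
theta_bar <- mean(theta)                  # estimate of the surface temperature
u_A       <- sd(theta) / sqrt(N)          # Type A: standard deviation of the mean
c(mean = theta_bar, u_A = u_A, u_B = u_B)
```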
Is there any way to consider the Type B uncertainties when calculating the mean, or to "combine" both types of uncertainties? Unfortunately, I did not find any hint about this in the [GUM](https://www.bipm.org/documents/20126/2071204/JCGM_100_2008_E.pdf/cb0ef43f-baa5-11cf-3f85-4dcd86f77bd6) (or I misread it).
| Combining GUM Type A and Type B uncertainties | CC BY-SA 4.0 | null | 2023-06-03T11:41:06.293 | 2023-06-03T11:41:06.293 | null | null | 296733 | [
"mean",
"repeated-measures",
"standard-deviation",
"measurement-error",
"uncertainty"
] |
617760 | 1 | null | null | 0 | 19 | Criteria I have to follow:
- Logistic Regression
- Prior Correction to logit after King and Zheng (2001)
- SMOTE, RUS, ROS
- Brier Score, ROC AUC
Without Resampling
Brier Score: 0.05803142823661517 \
ROC AUC Score: 0.8025664068214592
With SMOTE
Brier Score: 0.07421110615980243 \
ROC AUC Score: 0.7992643524420573
With ROS
Brier Score: 0.07338180469112295\
ROC AUC Score: 0.7994733432255637
With RUS
Brier Score: 0.08307287966349892\
ROC AUC Score: 0.8034232690338355
I did some preprocessing (dropping a column, using StandardScaler, and using LabelEncoder). I used 'stratify=y' when performing the train/test split with a ratio of 80/20.
What could be the reason for this?
I'm thankful for every answer!
Best Regards
| Imbalanced dataset has worse or same evaluation before and after using sampling techniques (RUS, ROS, SMOTE), what can I do? | CC BY-SA 4.0 | null | 2023-06-03T11:44:06.703 | 2023-06-03T12:48:46.717 | 2023-06-03T12:48:46.717 | 389501 | 389501 | [
"regression",
"logistic",
"python",
"unbalanced-classes",
"smote"
] |
617761 | 1 | null | null | 0 | 10 | I have subjects (id) and each of them has a set of points (x and y coordinates) of locations I'm interested in.
I would like to calculate a Standard Deviational Ellipse for each id, taking into account their set of points, and then be able to plot them, using R.
I do not have an idea of where to start from. I saw the aspace library, which is however no longer in use.
| Standard Deviational Ellipse with R | CC BY-SA 4.0 | null | 2023-06-03T12:35:42.930 | 2023-06-03T15:09:07.623 | 2023-06-03T15:09:07.623 | 195704 | 195704 | [
"r",
"distance",
"gis",
"ellipse"
] |
617762 | 2 | null | 617637 | 0 | null | Here are a couple of charts I made for another purpose (the law of the iterated logarithm), but they may also help here. Both show $100$ different simulations, each of over $10{,}000$ cases of independent noise with mean $\mu=0$ and variance $\sigma^2=1$, so standard deviation $\sigma=1$. The $100$ simulations are the same in both charts (see the upper grey line on the left or yellow in the middle), but you get a very different perspective depending on whether you look at the averages or the sums.
The first chart shows the cumulative averages. These tend to converge to $0$ as the sample size increases (a law-of-large-numbers result), and the variance of the average is $\frac{\sigma^2}{n}=\frac1n$ here, so the standard deviation of the average is $\frac1{\sqrt{n}}$, both decreasing with $n$.
[](https://i.stack.imgur.com/yqQWS.png)
The second chart shows the cumulative sums. These do not converge as the sample size increases. The variance of the sum is the sum of the variances, $n\sigma^2=n$ here, and thus the standard deviation of the sum is $\sqrt{n}$, both increasing with $n$.
[](https://i.stack.imgur.com/MBjCp.png)
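A small R sketch that reproduces the same idea (not the exact code behind the charts above):
```
set.seed(1)
n_sims <- 100; n_obs <- 10000
x     <- matrix(rnorm(n_sims * n_obs), nrow = n_obs)  # one column per simulation
sums  <- apply(x, 2, cumsum)                          # cumulative sums
means <- sums / seq_len(n_obs)                        # cumulative averages

par(mfrow = c(1, 2))
matplot(means, type = "l", lty = 1, col = rgb(0, 0, 0, 0.15),
        xlab = "n", ylab = "cumulative average")      # hugs 0, width ~ 1/sqrt(n)
matplot(sums,  type = "l", lty = 1, col = rgb(0, 0, 0, 0.15),
        xlab = "n", ylab = "cumulative sum")          # spreads out, width ~ sqrt(n)
```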
| null | CC BY-SA 4.0 | null | 2023-06-03T13:23:26.197 | 2023-06-03T13:23:26.197 | null | null | 2958 | null |
617763 | 1 | null | null | 1 | 10 | I have the following problem:
I have a sample of individuals who have opinions on certain subjects. I have to group these individuals randomly into groups of the same size for every subject and take the mean opinion as a variable in my regression. Some subjects, however, have more opinions than others, so I set the group size to e.g. 5 per subject.
Because I do not see any other way, as I can't just ignore the opinions on subjects with more than 5 opinions, I simulate the random group assignment, take their mean opinion, run the regression, and save the coefficients, standard errors and R-squared. I do that 1000 times.
I know I can just use the mean coefficient as a result, but I also want to make a judgement on the statistical significance of the different variables. Am I forced to look at the individual regressions and say a variable was statistically significant in x/1000 of them, or is there a way to get "summary p-values" for these 1000 regressions and their variables? The same question applies to R-squared.
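The loop is essentially the following (sketched in R just for concreteness; `opinions` — with columns `subject` and `opinion` — and `dat` — with the outcome and the subject identifier — are placeholder names, not my actual variables):
```
set.seed(42)
one_draw <- function() {
  # draw 5 random opinions per subject and average them
  sampled <- do.call(rbind, lapply(split(opinions, opinions$subject),
                                   function(d) d[sample(nrow(d), 5), ]))
  means <- aggregate(opinion ~ subject, data = sampled, FUN = mean)
  m <- summary(lm(outcome ~ opinion, data = merge(dat, means, by = "subject")))
  c(beta = m$coefficients["opinion", 1],   # coefficient in this draw
    p    = m$coefficients["opinion", 4],   # its p-value in this draw
    r2   = m$r.squared)
}
res <- t(replicate(1000, one_draw()))      # 1000 rows: beta, p, r2 per draw
colMeans(res)                              # averages across the 1000 draws
```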
| Regression Simulation with random variable - Getting pvalue | CC BY-SA 4.0 | null | 2023-06-03T13:32:14.263 | 2023-06-03T14:19:29.330 | null | null | 389503 | [
"regression",
"p-value",
"simulation",
"stata",
"r-squared"
] |