Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
617764 | 1 | null | null | 1 | 12 | I have conducted a survey where respondents had to allocate 100 points over 3 choices. There were 8 questions like this in total, where the choices were the same but the attributes of the choices were different. The independent variables I have are gender, age, highest education, money spent per week on groceries, and diet. Now I would like to see whether there are any significant differences between the 3 options over all 8 questions, and which independent variables have a significant effect on the choices. I haven't done much in statistics, so I'm quite lost.
| Constant sum survey data! | CC BY-SA 4.0 | null | 2023-06-03T14:15:53.620 | 2023-06-03T14:28:54.717 | 2023-06-03T14:28:54.717 | 7290 | 389505 | [
"survey",
"compositional-data"
] |
617765 | 2 | null | 617763 | 0 | null | While I am not fully following your logic on selecting 5 and only 5 from each person, your overall strategy for determining the P-value via simulation is heading in the right direction.
However, let me suggest the following adjustment. Instead of asking if the regression (by which I take this to mean the slope of the regression line) is significant (or the mean or the $R^2$, etc.), use some fixed value for comparison purposes. Then with your 1,000 simulations, you can ask how many are at or above this value (or at or below, if your result is negative). This count divided by the number of simulations would be your (bootstrap) simulation P-value.
Happy to clarify more if needed.
| null | CC BY-SA 4.0 | null | 2023-06-03T14:19:29.330 | 2023-06-03T14:19:29.330 | null | null | 199063 | null |
617766 | 2 | null | 617752 | 4 | null | As the answer from @AlexJ says (+1), you do have data that can be reformatted in a way that can take advantage of well-documented statistical survival functions. What can be surprising is that, when there's at most one event possible per individual, the individual identities often don't matter.
Recall the formula for the Kaplan-Meier estimator. It's calculated at each event time $t_i$ based on the number that had the event at that time, $d_i$, and the number that were at risk of the event, $n_i$. The estimated survival at time $t$, $\widehat S(t)$, is then:
$$\widehat S(t) = \prod\limits_{i:\ t_i\le t} \left(1 - \frac{d_i}{n_i}\right).$$
There's nothing there that requires knowing individual identities, just how many were at risk and how many experienced the event at each time. It's only in more complicated cases (e.g., more than one event possible per individual, some parametric survival models with time-varying covariates) that you need to keep track of individual identities.
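As a tiny illustration of that point (made-up counts, not your data), the product-limit estimate can be computed from the per-time aggregates alone:
```
d <- c(2, 1, 3)          # number of events at each event time
n <- c(20, 18, 17)       # number at risk at each event time
S <- cumprod(1 - d / n)  # Kaplan-Meier estimate at each event time
S
```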
Replicates
In your experimental design, however, you probably do need to keep track of the "Replicate" associated with each individual. That will allow you to take potential correlations of outcomes within a Replicate into account. I suggest that you re-name the Replicate values, now all numbers 1 to 6, to include both the Treatment and the Replicate number, so there's no confusion that Replicate 4 in Control is the same as Replicate 4 in Treatment 5: e.g., use names like "R4C" and "R4T5" to keep them clearly distinguished.
"Interval-censored" data
The format that @AlexJ recommends represents the data as "interval censored"; that is, you know that the event occurred sometime between the two limits of the time interval, but not exactly when. In that format, the 0/1 status data you are thinking of is effectively replaced by the time value at the end of each interval: if it's finite, there was an event during the interval (status = 1), if it's infinite (`Inf` in R) the time to the event is right-censored (status = 0).
Need for regression model
The problem with a Kaplan-Meier analysis in this case is that it will be hard to distinguish treatment differences very well when you have so many treatments. The usual "log-rank" test done as part of such analysis can tell you if there are any differences among treatments, but not which ones are significantly different from control or which differ from each other. To get those types of results it's best to build a regression model of some type.
A [Cox proportional hazards model](https://stats.stackexchange.com/tags/cox-model/info) is a frequent choice. In your situation that would not require keeping track of individuals, either. There are tools available in the R [icenReg package](https://cran.r-project.org/package=icenReg) for Cox models on interval-censored data. I'm not sure, however, that those tools can take the "Replicate" values into account in an efficient way.
One choice, often made in practice even though it's not theoretically exact, would be to perform a standard Cox model where the event time is recorded as the end of the time interval even though you don't know exactly when the event occurred. You can think of that as a model of when you observed the death rather than when it occurred. A `cluster` term in the Cox model could then take "Replicate" values into account.
With your current data setup, you can generate a new data table for that pretty simply. For each time point, for each row in your current data, create a number of rows in the new table equal to the difference in survival since the previous time point,* copying over all the information about Treatment/Replicate, setting the Time for those rows to be that time point, and the Event marker to be 1. At the last time point, make copies with the Event marker of 0 instead for the number that survived at last observation. That way, you end up with a number of rows equal to the original number of animals, with the last observation time and the status at that time (even though this doesn't distinguish which animals are which).
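A sketch of that approximate Cox fit on the reshaped table (here `perAnimalData`, with columns `Time`, `Status`, `Treatment` and `renamedReplicate`, is just an assumed layout for that one-row-per-animal table):
```
library(survival)
coxFit <- coxph(Surv(Time, Status) ~ Treatment + cluster(renamedReplicate),
                data = perAnimalData)
summary(coxFit)
```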
Discrete-time survival
As discussed on [this page](https://stats.stackexchange.com/q/576969/28500), however, with so few time points you might be better off with a discrete-time survival model. That's just a binomial [generalized linear model](https://stats.stackexchange.com/tags/generalized-linear-model/info) with data in a long "person-period" format: one row for each individual at risk during each time interval, with indicators of the time interval, the treatment, the replicate, and 1/0 for event/no-event during the time interval.
If you have a new data table as recommended above, with one row per individual including the time point and status at the last observation, then you can use the `dataLong()` function in the R [discSurv package](https://cran.r-project.org/package=discSurv) to generate the (very long) person-period data format. A choice of `timeAsFactor=TRUE` will be most similar to the Kaplan-Meier and Cox models, in that there is no assumption about a numeric form of baseline survival over time. That generates the long-form data with a "`timeInt`" value representing the time interval and a new outcome variable "`y`" for the status of the individual at the end of that interval.
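For concreteness, a sketch of that call (the column names `Time` and `Status` are assumptions about your one-row-per-animal table, and the argument names may differ between discSurv versions, so check `?dataLong`):
```
library(discSurv)
longDat <- dataLong(dataShort    = perAnimalData,
                    timeColumn   = "Time",
                    eventColumn  = "Status",
                    timeAsFactor = TRUE)
# longDat has one row per animal per interval, with timeInt and outcome y
```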
With such person-period data, you could use a binomial generalized mixed model to take the Replicate values into account, treating Replicate as a random intercept. That's implemented by the `glmer()` function in the R [lme4 package](https://cran.r-project.org/package=lme4). To be closest to a Cox model you could specify the ["cloglog" link](https://stats.stackexchange.com/q/429266/28500) instead of the default "logit" link used for [binomial logistic regression](https://stats.oarc.ucla.edu/r/dae/logit-regression/). The model could look something like this:
```
discreteModel <- glmer(y ~ timeInt + Treatment +
(1|renamedReplicate),
data = dataInLongFormat,
family = binomial(link = "cloglog"))
```
Then you can use post-modeling tools like those in the R [emmeans package](https://cran.r-project.org/package=emmeans) to evaluate specific comparisons of interest. For example, you can evaluate the difference of each treatment from control if you specify "trt.vs.ctrl" contrasts in that package.
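For instance, a sketch of that comparison (assuming the first level of `Treatment` is the control):
```
library(emmeans)
emm <- emmeans(discreteModel, ~ Treatment)
contrast(emm, method = "trt.vs.ctrl")  # each treatment vs. the control level
```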
---
*Before you do this, double-check your data so that you don't have any "resurrections" like you show for Replicate 6 of Control in your sample data.
| null | CC BY-SA 4.0 | null | 2023-06-03T14:57:33.650 | 2023-06-03T15:07:28.223 | 2023-06-03T15:07:28.223 | 28500 | 28500 | null |
617767 | 1 | null | null | 0 | 6 | The question is essentially in the title: Say we have a continuous-time process from which we have drawn samples. I would like to plot them in a histogram. However, since we are in continuous time, I've read that we somehow need to "reweight" the histogram. How do we need to do that? Unfortunately, I found nothing on the web about that.
| How do we plot a histogram of a continuous-time sampler? | CC BY-SA 4.0 | null | 2023-06-03T14:59:48.953 | 2023-06-03T14:59:48.953 | null | null | 222528 | [
"sampling",
"markov-chain-montecarlo",
"monte-carlo",
"markov-process",
"histogram"
] |
617768 | 1 | null | null | 3 | 15 |
### Two sample t-test
Consider the following t-test
Let $x_1, \dots, x_n \sim N(\mu_1,\sigma^2)$ and $y_1, \dots, y_n \sim N(\mu_2,\sigma^2)$ be independent samples. Let's define
- The raw effect
$\theta = \mu_2-\mu_1$
- The estimate of the raw effect
$\hat{\theta} = \bar{Y} - \bar{X} \sim N \left(\theta,\frac{2}{n} \sigma^2 \right)$
- The standard error of the raw effect
$\text{se}(\hat\theta) = \sqrt\frac{2}{n} \hat\sigma = \sqrt\frac{2}{n} \sqrt{\frac{\sum_{i=1}^n (X_i-\bar{X})^2 + \sum_{i=1}^n (Y_i-\bar{Y})^2}{2n -2}} \sim \sigma\sqrt{\frac{2}{n}} \sqrt{\frac{1}{2n-2}} \chi_{2n-2}$
where $a\chi_{2n-2}$ means a scaled chi distribution (a special case of the generalized gamma distribution).
The t-statistic is defined as
$$t = \frac{\hat\theta}{\text{se}(\hat\theta)} \sim t_{2n-2,\theta\sqrt{n/2}}$$
### Distribution of the effect conditional on significance
A null hypothesis test will consider estimates $\hat\theta$ significant or not significant according to the value of the t-statistic and some cutoff value $t_c$
$$\begin{array}{rl}
\text{significant: }&|t| \geq t_c\\
\text{non-significant: }&|t| < t_c
\end{array}$$
We consider the distribution of the effect size conditional on the null hypothesis test being significant
$$f(\hat{\theta} \text{ given } |t| \geq t_c)$$
where the additional parameters are the significance level $\alpha$, the sample size $n$, and the true effect size $d$.
A simulation shows that this distribution looks as follows for $\alpha = 0.05$, $n = 5$ and $d = 0.5$.
[](https://i.stack.imgur.com/g4kbl.png)
Aside from making a histogram, we can also compute the distribution based on the PDF of the normal distribution $\phi(x)$ and the CDF of the chi-squared distribution $F_{\chi^2_\nu}(x)$
$$f(\hat\theta;d,n,t_c) \propto \phi\left(\frac{\hat\theta-d}{\sqrt{2/n}}\right)F_{\chi^2_\nu}\left(\frac{(n-1) n\hat\theta^2}{t_c^2}\right)$$
---
### Question
Can we compute or estimate (in a way that is more efficient than Monte Carlo sampling or integration) the following two properties?
The CDF
$\int_{-\infty}^0 f(\hat{\theta}) d\hat\theta$
and the mean absolute value
$\int_{-\infty}^\infty |\hat\theta| f(\hat{\theta}) d\hat\theta$
The motivation is that these relate to the type S and type M error, as described in questions:
- The probability of making a Type S error, and the average amount of magnification (type M error) as a function of power
- Power paradox: overestimated effect size in low-powered study, but the estimator is unbiased
- Understanding Gelman & Carlin "Beyond Power Calculations: ..." (2014)
---
R-code for image
```
### settings
set.seed(1)
d = 0.2
n = 5
nu = 2*n-2
alpha = 0.05
n.sim = 10^5
### boundary for alpha level t-test
tc = qt(1-alpha/2, df = nu)
#### simulations of t-test
effect = rep(NA,n.sim)
significant= rep(NA,n.sim)
for (i in 1:n.sim) {
X = rnorm(n)
Y = rnorm(n,d)
significant[i] = t.test(X,Y)$p.value < alpha
effect[i] = mean(Y)-mean(X)
}
### significance tests
#effect = rnorm(n.sim,d,sqrt(2/n))
#s = sqrt(rchisq(n.sim,nu)/nu)*sqrt(2/n)
#significant = abs(effect/s)>tc
### plot histogram
hist(effect[significant], breaks = seq(-3.5,3.5,0.1), freq = 0)
### add computation of density curve
power = pt(-tc,nu,d/sqrt(2/n)) +
1-pt(tc,nu,d/sqrt(2/n))
power
xs = seq(-4,4,0.01)
#lines(xs,dnorm(xs,d, sqrt(2/n))/power)
pfilter = (xs/tc)^2*(nu*n/2)
lines(xs,dnorm(xs,d, sqrt(2/n))*pchisq(pfilter,nu)/power)
```
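For reference, the two quantities above can be checked by brute-force numerical integration of the same (normalised) density used for the curve; this is exactly the baseline the question hopes to improve on:
```
### brute-force check, reusing d, n, nu, tc and power from the code above
dens <- function(x) dnorm(x, d, sqrt(2/n)) * pchisq((x/tc)^2*(nu*n/2), nu) / power
integrate(dens, -Inf, 0)$value                           # CDF at 0
integrate(function(x) abs(x) * dens(x), -Inf, Inf)$value # mean absolute value
```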
| properties of a distribution obtained by filtering normal distributed variables by the result of a t-test | CC BY-SA 4.0 | null | 2023-06-03T14:59:55.570 | 2023-06-03T22:15:57.737 | 2023-06-03T22:15:57.737 | 164061 | 164061 | [
"distributions",
"mathematical-statistics",
"t-test"
] |
617769 | 1 | null | null | 1 | 17 | Moving window statistics (see [this](https://www.gnu.org/software/gsl/doc/html/movstat.html), for example) are sample statistics calculated over moving/rolling windows over a time-series.
For example, given the time-series $\{x_1,x_2,\dots\}$ one can construct moving windows of width $W$ and calculate statistics $S(\mathrm{window})$ like this:
$$
\begin{matrix}
x_1 & x_2 & x_3 & x_4 & & & & \Rightarrow S(\cdot) & \Rightarrow S_4\\
& x_2 & x_3 & x_4 & x_5 & & & \Rightarrow S(\cdot) & \Rightarrow S_5\\
& & x_3 & x_4 & x_5 & x_6 & & \Rightarrow S(\cdot) & \Rightarrow S_6\\
& & & x_4 & x_5 & x_6 & x_7 & \Rightarrow S(\cdot) & \Rightarrow S_7
\end{matrix}
$$
The $S_t$ could be the moving average $\mu_t$ or the moving variance $\sigma_t^2$:
$$
\begin{aligned}
\mu_t &= \frac1W \sum_{i=1}^W x_{t-i+1} = \mathrm{avg}\left\{
x_t, x_{t-1}, \dots, x_{t-W+1}
\right\}\\
\sigma_t^2 &= \frac1W \sum_{i=1}^W (x_{t-i+1} - \mu_t)^2=
\mathrm{var}\left\{
x_t, x_{t-1}, \dots, x_{t-W+1}
\right\}
\end{aligned}
$$
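As a concrete illustration of these definitions in R (using the $1/W$ variance from the formula above rather than `var()`, which divides by $W-1$):
```
set.seed(1)
x <- cumsum(rnorm(100))  # an example series
W <- 4
idx <- W:length(x)
mu <- sapply(idx, function(t) mean(x[(t - W + 1):t]))
s2 <- sapply(idx, function(t) {w <- x[(t - W + 1):t]; mean((w - mean(w))^2)})
```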
---
My question is what do these moving statistics actually estimate?
- Does the moving average estimate the expected value of the corresponding stochastic process, $\mu_t = \mathbb{E}X_t$? Note that moving averages typically vary over time, so it would seem that they're supposed to estimate $\mu_t$, which varies in time as well.
- Does the moving variance estimate $\sigma_t^2 = \mathbb{V}X_t$?
- And so on. Do these kinds of statistics estimate anything? Or are they just filtering out "noise"?
- Given that $x_t$ are not independent, why does it make sense to compute sample statistics for them?
I know that for ergodic processes the expanding (!) average, $\frac1T \int_0^T X(t)\mathrm{d}t$, estimates $\mathbb{E}X_t = \mu_t = \mu$, which doesn't change in time. However, for moving averages/variances/etc. we're constantly discarding the "oldest" observation and adding the most recent one to the window, so the estimate changes in time.
Are there any theoretical treatments of this issue? Do moving statistics have anything to do with ergodicity, stationarity etc?
| What is the probabilistic meaning of moving window statistics? | CC BY-SA 4.0 | null | 2023-06-03T15:07:07.753 | 2023-06-03T15:34:24.773 | 2023-06-03T15:34:24.773 | 328515 | 328515 | [
"time-series",
"moving-window"
] |
617770 | 1 | null | null | 0 | 3 | I read a discussion about missing data imputation using the largest autocorrelation in lag 1.
I read it here: [R imputation of missing value using autocorrelation](https://stackoverflow.com/questions/76241618/r-imputation-of-missing-value-using-autocorrelation/76243630?noredirect=1#comment134458243_76243630).
I want to know the theory that explains it. Can anyone provide journal/book information about this?
| References for filling in missing values with autocorrelation | CC BY-SA 4.0 | null | 2023-06-03T15:11:51.673 | 2023-06-03T15:19:48.163 | 2023-06-03T15:19:48.163 | 385401 | 385401 | [
"time-series",
"autocorrelation",
"missing-data",
"lags"
] |
617771 | 1 | 617792 | null | 0 | 48 | [An Introduction to Statistical Learning
with Applications in R](https://hastie.su.domains/ISLR2/ISLRv2_website.pdf) 2nd edition by Hastie et al. says that
>
Statistical learning refers to a set of tools for making sense of complex datasets.
How is it different from Machine Learning then?
Is Machine Learning a subset of Statistical Learning?
| What is the difference between Statistical Learning and Machine Learning? | CC BY-SA 4.0 | null | 2023-06-03T16:09:50.227 | 2023-06-03T21:21:51.590 | 2023-06-03T18:19:14.320 | 22311 | 109372 | [
"machine-learning"
] |
617772 | 1 | 617777 | null | 1 | 108 | [IBM says that](https://www.ibm.com/topics/deep-learning):
>
Deep learning is a subset of machine learning, which is essentially a neural network with three or more layers.
Therefore, the question is, is there anything called Shallow Learning that uses fewer than three layers of an artificial neural network?
| What is the opposite of deep learning? | CC BY-SA 4.0 | null | 2023-06-03T16:24:47.577 | 2023-06-03T16:32:39.093 | null | null | 109372 | [
"neural-networks"
] |
617773 | 1 | 617778 | null | 1 | 39 | If the correlation was significant enough (however you define it), does it necessarily show that there is some underlying cause even if it's not direct causation?
| Does significant correlation imply at least some common underlying cause? | CC BY-SA 4.0 | null | 2023-06-03T16:28:27.347 | 2023-06-03T16:58:25.450 | 2023-06-03T16:58:25.450 | 53690 | 389512 | [
"correlation",
"causality"
] |
617774 | 1 | null | null | 0 | 13 | I'm creating an epidemic simulation program in Python, but I'm stuck trying to determine the probability of being infected (if exposed).
I have an m x n grid of 0's and 1's (0's represent healthy cells, 1's represent infected cells). For simplicity I'll use a 3x3 grid for this example.
```
0 1 1
0 0 0
0 1 0
```
The center value at coords (1, 1) will take the sum of its neighbours (sum = 3). This means that this particular cell is exposed to a virus from three different infected cells.
Let's say that the probability of catching the virus is 0.25 (assuming 0.0 immunity) for every exposure (they are independent of each other). I have a different grid with the immunity values (ranging from 0 to 1 where 1 is immune).
I cannot for the love of Jesus figure out how to set up the math to determine the probability of the cell being infected.
Please help :)
| A simple probability model for an epidemic simulation | CC BY-SA 4.0 | null | 2023-06-03T07:28:58.273 | 2023-06-03T16:58:24.960 | null | null | 389513 | [
"python",
"numpy",
"probability"
] |
617775 | 2 | null | 617774 | 0 | null | You can create a kernel `k` and convolve it over the grid `a`. This gives you the count of infected neighbors for each cell. Then you can calculate as shown below. I am not an epidemiology expert. I have used simple logic to solve this.
```
import numpy as np
from scipy.ndimage import convolve
infection_prob = 0.25
# immunity = np.random.standard_normal(9)
a = np.array([[0,1,0], [1,0,0], [0,0,1]])
# immunity = immunity.reshape(a.shape)
k = np.ones(a.shape)
x, y = 1, 1
k[x, y] = 0                               # exclude the cell itself from the kernel
c = convolve(a, k, mode="constant")       # c[x, y] = number of infected neighbors
p_escape = (1 - infection_prob)**c[x, y]  # probability of escaping every exposure
print(1 - p_escape)                       # probability of being infected at least once
>> 0.578125
```
| null | CC BY-SA 4.0 | null | 2023-06-03T09:12:02.773 | 2023-06-03T16:48:02.847 | 2023-06-03T16:48:02.847 | 183128 | 183128 | null |
617776 | 2 | null | 617774 | 0 | null | If probability of infection occurring from a single exposure event is $0.25$ then probability of an infection occurring from three independent events is $1-(1-0.25)^3= 1 - 0.75^3 = 0.578125.$
| null | CC BY-SA 4.0 | null | 2023-06-03T16:21:16.080 | 2023-06-03T16:58:24.960 | 2023-06-03T16:58:24.960 | 362671 | 389516 | null |
617777 | 2 | null | 617772 | 5 | null | Yes, for instance multi-layer perceptron (MLP) neural networks (with two layers of modifiable weights), support vector machines, Radial Basis Function (RBF) neural networks, Kernel Logistic Regression, Gaussian Processes. There are a lot of "shallow" learning methods, and they are still very useful even if not very fashionable at the moment. Linear regression and standard logistic regression are also "shallow learning".
Oddly enough, shallow learning methods often out-perform deep learning methods, but I suspect publication bias means that this isn't as widely appreciated as it should be.
| null | CC BY-SA 4.0 | null | 2023-06-03T16:32:39.093 | 2023-06-03T16:32:39.093 | null | null | 887 | null |
617778 | 2 | null | 617773 | 2 | null | It is not "necessarily". The usual "post hoc ergo propter hoc" fallacy is the fallacy that says correlation always implies causation, or even that you can infer causality. This is still a fallacy. However, if you add another condition to the correlation - namely, that of mechanism - then you get Reichenbach’s common cause principle: "No correlation without causation."
As an example, the rooster crowing causing the sunrise is a "post hoc" because there's no plausible mechanism for a lone rooster crowing to cause the sunrise - although, interestingly enough, there could be causation the other way: does the sunrise cause the rooster to crow? Perhaps.
We've been conditioned to avoid post hoc so strongly that I think the Reichenbach common cause principle needs trumpeting. The Reichenbach common cause principle says that if there's a mechanism, you should look for causality somewhere: the principle doesn't say which direction the causality is, and it also doesn't rule out a third common cause (confounding variable).
Another way of saying it is
>
Correlation doesn’t imply causation, but it does waggle its eyebrows suggestively and gesture furtively while mouthing ‘look over there’. - Randall Munroe.
| null | CC BY-SA 4.0 | null | 2023-06-03T16:43:42.710 | 2023-06-03T16:43:42.710 | null | null | 76484 | null |
617779 | 1 | null | null | 0 | 7 | I need to conduct a mediation analysis for this model: continuous IV, a continuous mediator and a categorical DV (3 categories). How do I do this using SPSS?
Normally I do mediation analyses using `PROCESS`, but I understand it does not work for categorical DVs, so I guess I need to use logistic regression, but how should it exactly work with a continuous mediator?
| Mediation analysis with categorical DV | CC BY-SA 4.0 | null | 2023-06-03T16:47:27.033 | 2023-06-03T17:27:28.927 | 2023-06-03T17:27:28.927 | 362671 | 389514 | [
"spss",
"mediation"
] |
617780 | 1 | null | null | 0 | 5 | I have a sample of data in R looking at vendors and I would like to compare the % of vendors which sell a product for each vendor type, as in the table below:
[](https://i.stack.imgur.com/Opgk0.png)
I have further data for which I want to calculate the same sort of proportions, but also for 4 different towns. From an overview it seems like some of the categories are quite different, so I would like to do some sort of statistical analysis. While I could do a Chi-squared, it would also be interesting to know where the differences lie. Please let me know if there are any suggestions, as my stats knowledge is limited to 1st year uni.
| What test to do when comparing multiple proportions? If Chi-squared, how do I see where differences lie? | CC BY-SA 4.0 | null | 2023-06-03T16:54:45.580 | 2023-06-03T16:54:45.580 | null | null | 389515 | [
"r",
"chi-squared-test",
"multiple-comparisons",
"proportion"
] |
617781 | 1 | null | null | 0 | 2 | I have a model of some financial data that achieves an R2 of ~0.01 (1%) using RidgeCV -- this is about what I expect. I'm exploring building the equivalent model using SGDRegressor so I can leverage partial_fit to do incremental training over larger-than-memory data sets... Unfortunately my SGDRegressor does not converge to the same R2 as RidgeCV even for equivalent L2, squared_loss and alpha values. I've tried fiddling with many parameters but it's not clear how to pick params...
Any advice on getting SGDRegressor to converge similarly to RidgeCV? E.g., if I have a 4M row data set, is it best to call 10000's of epochs across many small (32 rows) batches with partial_fit or do large batches (500,000 rows)?
```
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

EPOCH_COUNT = 10000
CHUNK_SIZE = 500000
# Initialize SGDRegressor and StandardScaler
sgd = SGDRegressor(verbose=0,
tol=1e-4,
loss='squared_error',
penalty='l2',
alpha=0.1,
learning_rate='constant',
eta0=0.01,
fit_intercept=True, # play with this
shuffle=True,
)
scaler = StandardScaler()
regmodel = make_pipeline(StandardScaler(), sgd)
r2s=[]
r2_test=[]
# Load data in chunks
for i in range(0, len(df_targets), CHUNK_SIZE):
df_chunk = df_targets.iloc[i:i+CHUNK_SIZE]
# split df_chunk into train and set set using sklearn
chunk, chunk_test = train_test_split(df_chunk, test_size=0.1, shuffle=False)
y = chunk[hyper['target_name']].values
X = chunk[features].values
X_test = chunk_test[features].values
y_test = chunk_test[hyper['target_name']].values
if len(df_chunk) != CHUNK_SIZE:
print('skipping chunk {} as len(X) = {} != CHUNK_SIZE = {}'.format(i, len(X), CHUNK_SIZE))
break
# Hack
if i == 0:
# Fit the scaler on the first chunk only
print('fitting scaler on first chunk')
scaler = scaler.fit(X)
X = scaler.transform(X, copy=False)
X_test = scaler.transform(X_test, copy=False)
for j in range(0, EPOCH_COUNT):
sgd.partial_fit(X,y)
level_2_r2_score = sgd.score(X,y)
level_2_r2_score
r2s.append(level_2_r2_score)
r2_score_test = sgd.score(X_test,y_test)
r2_test.append(r2_score_test)
print('chunk {} - EPOCH {} - === > r2_train = {} - r2_test = {} '.format(i, j, level_2_r2_score, r2_score_test))
```
| Getting SGDRegressor to converge to equivalent RidgeCV R2 results | CC BY-SA 4.0 | null | 2023-06-03T17:01:54.893 | 2023-06-03T17:01:54.893 | null | null | 374984 | [
"stochastic-gradient-descent"
] |
617782 | 1 | null | null | 0 | 13 | I have a logistic regression model with 1 dependent variable DV and 6 independent variables IV1, ..., IV6. I'd like to control for the effect of a variable named CV.
Now in R, I suppose we can control for CV by "adding" it in the formula to become:
```
DV ~ IV1 + IV2 + IV3 + IV4 + IV5 + IV6 + CV
```
But there's another alternative, by "multiplying it" in the formula which I've been seeing a bit:
```
DV ~ IV1*CV + IV2*CV + IV3*CV + IV4*CV + IV5*CV + IV6*CV
```
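For reference, `IV1*CV` in R's formula syntax is shorthand for `IV1 + CV + IV1:CV`, so the second form adds CV's main effect plus its interaction with each IV. A sketch with just two IVs (the data frame `dat` is illustrative):
```
m_additive    <- glm(DV ~ IV1 + IV2 + CV,  family = binomial, data = dat)
m_interaction <- glm(DV ~ IV1*CV + IV2*CV, family = binomial, data = dat)
# the second expands to DV ~ IV1 + IV2 + CV + IV1:CV + IV2:CV
```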
What are the advantages of controlling for a variable as in the first option compared to the second option? When controlling for a variable, is it necessary to control for the interactions too?
| Controlling variable in R via "+" or asterisk "*" | CC BY-SA 4.0 | null | 2023-06-03T17:30:06.933 | 2023-06-03T17:30:06.933 | null | null | 389519 | [
"r",
"regression",
"logistic",
"linear"
] |
617784 | 2 | null | 617619 | 0 | null | We can write $$\begin{align*}
p(y = 1 \mid x) &= \frac{p(x \mid y = 1)p(y = 1)}{p(x \mid y = 1)p(y = 1) + p(x \mid y = 0)p(y = 0)} \\
&= \frac{1}{1 + \frac{p(x \mid y = 0)p(y = 0)}{p(x \mid y = 1)p(y = 1)}} \\
&= \frac{1}{1 + \exp \log \left\{ \frac{p(x \mid y = 0)p(y = 0)}{p(x \mid y = 1)p(y = 1)} \right\}}.
\end{align*}$$
We want $\log \left\{ \frac{p(x \mid y = 0)p(y = 0)}{p(x \mid y = 1)p(y = 1)} \right\}$ to be an affine function of $x$. Indeed this is true if $p(x \mid y)$ is in the exponential family, since if
$$p(x \mid y) = h(x) \exp\{\eta(y)^T x - a(\eta(y))\}$$
Then
$$\log \left\{ \frac{p(x \mid y = 0)p(y = 0)}{p(x \mid y = 1)p(y = 1)} \right\} = \log\left\{\frac{p(y=0)}{p(y=1)}\right\} + (\eta(0) - \eta(1))^T x - \big(a(\eta(0)) - a(\eta(1))\big).$$
So
$$p(y = 1 \mid x) = \frac{1}{1 + \exp(\beta_0 + \beta_1 x_1 + \cdots + \beta_p x_p)}$$
where $$\beta_0 = \log\left\{\frac{p(y=0)}{p(y=1)}\right\} - \big(a(\eta(0)) - a(\eta(1))\big)$$
and $$\beta_j = (\eta(0) - \eta(1))_j.$$
| null | CC BY-SA 4.0 | null | 2023-06-03T18:01:35.040 | 2023-06-03T18:01:35.040 | null | null | 340035 | null |
617785 | 1 | null | null | 1 | 19 | Our goal is to determine optimal cut-off test scores for course placement. The course placement has already been manually assigned to each test-taker. The goal is to replace this manual labor with the calculated cut-off test scores, so that a future test-taker from a similar group will be automatically placed into an optimal course.
We're looking for cut-off scores such as this:
- 0-9: Course A
- 10-14: Course B
- 15-19: Course C
- 20-24: Course D
- 25-30: Course E
In this example, if a student answers 14 questions correctly, they'd be placed into Course B.
The variables in this analysis are
- Independent: The screening test score, which is a continuous variable ranging from 0-30
- Dependent: Course placement, which is an ordinal variable, or a categorical variable for which there is a clear ordering of the category labels (i.e. Course B is more advanced than Course A, Course C is more advanced than Course B, and so forth).
Approaches we've considered, along with our concerns about them:
- Set the cut-off score for a specific course at one standard deviation above the median score for all the students who got placed into that course. (This uses the median instead of the mean, because there are some outliers in the data where some students who scored really high got placed into a low-level course).
Concerns: Imbalanced data. One of the courses had a disproportionately higher number of students placed into it, which inflates the accuracy of placement.
- Ordinal logistic regression — We've used this model to obtain the probabilities of being in a specific course, given a test score.
Concerns: These are probabilities and some of them overlap equally, so how can we decide with certainty which score value the cutoff should fall on? Is regression the correct approach?
[](https://i.stack.imgur.com/mDsMB.png)
How would you recommend we go about creating cut-off scores for this course placement test and evaluating it?
| How to create optimal cut-off scores for a test placing students into different courses | CC BY-SA 4.0 | null | 2023-06-03T18:07:42.730 | 2023-06-04T02:55:41.993 | 2023-06-04T02:55:41.993 | 97380 | 97380 | [
"probability",
"logistic",
"roc",
"consistency",
"threshold"
] |
617786 | 1 | null | null | 0 | 6 | I've been trying to solve the following exercise:
-> Consider a dataset with two points in 1D: $(x_1 = 0, y_1 = -1)$ and $(x_2 = \sqrt{2}, y_2 = 1)$.
Consider also the mapping to 3D $\varphi(x) = [1, \sqrt{2}\,x, x^2]$.
a) Find the optimal weights $w_0$. Hint: recall that the separation margin between the two
classes is $\rho = 2/\lVert w_0 \rVert$.
b) Find the optimal bias $b_0$.
c) Write the discriminant function $g(x) = w_0^T \varphi(x) + b_0$ as an explicit function of $x$.
Note: If you did not solve the two previous points, use the literals corresponding to
$w_0$ and $b_0$.
However, I cannot find a proper way to find the optimal parameters. Most equations I find require additional information. Could you help in this matter?
| How to calculate the optimal weights and bias in SVM (by hand) | CC BY-SA 4.0 | null | 2023-06-03T18:22:23.787 | 2023-06-03T18:22:23.787 | null | null | 289006 | [
"svm",
"bias",
"weights"
] |
617787 | 1 | null | null | 1 | 8 | Suppose we have the dyadic regression
$$y_{ij}\sim N((\mathbf{x}_j - \mathbf{x}_i)'\boldsymbol{\beta} + \theta_i + \theta_j,\sigma^2),$$
where $\mathbf{x}_i$ are node-level covariates, $\boldsymbol{\beta}$ and $\sigma^2$ are model parameters, and $\theta_i, \forall i$ are node-level random effects. My question: do I need to have an intercept term in this regression model, or are the $\theta$'s capable of recovering the intercept?
| Do I need an intercept in a dyadic regression model if I have random effects? | CC BY-SA 4.0 | null | 2023-06-03T19:02:38.200 | 2023-06-03T19:02:38.200 | null | null | 257939 | [
"regression",
"mixed-model",
"intercept",
"dyadic-data"
] |
617788 | 1 | null | null | 1 | 10 | I am unclear on how the R2 output from adonis2 is interpreted. Is it that a particular factor "explains" the differences in community composition? And thus, would R2 be interpreted as it typically is in univariate analyses?
| Interpretation of R2 from PERMANOVA (adonis2) in vegan package | CC BY-SA 4.0 | null | 2023-06-03T19:16:11.753 | 2023-06-03T20:48:28.343 | null | null | 162869 | [
"r",
"vegan"
] |
617789 | 1 | null | null | 1 | 7 | Suppose $X$ is a $d$-dimensional random vector. The coordinates follow an auto-regressive structure:
$$
X_{1} \sim N(\mu_1,\sigma^2_1), \qquad X_{j}|X_{< j} \sim N(a^T_j X_{<j},~ \sigma^2_j), \qquad a_j \in \mathbb{R}^{j-1}, \qquad 2 \le j \le d,
$$
where $X_{<j}$ denotes the vector containing the first $j-1$ coordinates of $X$.
I am wondering if this is a standard model in the literature or if it has a common name. Alternatively, is it a special case of some well known model?
More generally, we might have $X_{j}|X_{< j} \sim N(f_j (X_{<j}),~ \sigma^2_j)$ for some function $f_j:\mathbb{R}^{j-1}\to \mathbb{R}$, or potentially have the variance depend on $X_{<j}$ in a linear/non-linear way.
| Name for the following auto-regressive type data generating model | CC BY-SA 4.0 | null | 2023-06-03T19:53:15.127 | 2023-06-03T20:45:44.013 | 2023-06-03T20:45:44.013 | 55946 | 55946 | [
"time-series",
"autoregressive",
"vector-autoregression"
] |
617790 | 2 | null | 617788 | 0 | null | The $R^2$ values reported are indeed coefficients of determination, but they are all partial-$R^2$s (of at least the second-type). That is to say, the partial $R^2$ values reported most likely will not sum to 100%. However, each factor can be viewed as explaining (at least) $R^2\times 100\%$ of the variability in the dependent variables.
| null | CC BY-SA 4.0 | null | 2023-06-03T20:48:28.343 | 2023-06-03T20:48:28.343 | null | null | 199063 | null |
617791 | 1 | null | null | 0 | 6 | Let's say I want to flip a coin a billion times and count how many heads I get. But I don't have time to actually flip a coin a billion times.
Assuming I have a random number generator that can generate a number between 0 and 1, is there a technique I can use to simulate the outcome of 1 billion random coin tosses, such that it's statistically equivalent to actually doing a billion random coin tosses?
I have been thinking about this a lot and Googling, but haven't come up with a solution. It feels to me like we could plot a curve like this (please forgive my horrible sketch):
[](https://i.stack.imgur.com/jTTrX.png)
Where x is the chance of getting fewer than f(x) heads.
So when x is 0 (0% chance), f(x) is also 0. There's a 0% chance of getting fewer than 0 heads.
When x is 0.5 (50% chance), f(x) is half a billion. There's a 50% chance of getting fewer than half a billion heads.
When x is 1 (100% chance), f(x) is a billion (or a billion and one, I suppose). There's a 100% chance of getting fewer than a billion heads.
Moving right from 0 or left from 1, f(x) very quickly goes to very close to half a billion. This reflects how it would be extraordinarily unlikely to get, say, 400 million or 600 million heads.
My thinking is that if I had a correct equation for f(x), then I could generate a random number between 0 and 1, calculate f(x), and that would be the number of heads from my one billion "trials".
Would this concept actually work and is there a known approach (even if completely different from what I'm thinking) for solving this problem?
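For what it's worth, the curve I'm describing sounds like the quantile (inverse CDF) function of a Binomial(10^9, 0.5) distribution, so a sketch of the idea in R would be:
```
n <- 1e9
x <- runif(1)               # random number between 0 and 1
heads <- qbinom(x, n, 0.5)  # f(x): the number of heads for this draw
heads
```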
| Simulating the outcome of X number of random trials | CC BY-SA 4.0 | null | 2023-06-03T21:06:31.243 | 2023-06-03T21:06:31.243 | null | null | 353918 | [
"probability",
"random-variable"
] |
617792 | 2 | null | 617771 | 4 | null | I think any answers to this question will be verging on opinion-based, but I would say there is a gradient from
- theoretical or pure statistics, focused on rigorous proofs of the properties of various statistical procedures or tests;
- applied statistics, more interested in how procedures can be used with real data sets;
- computational statistics, which focuses on algorithms and computational properties of procedures;
- statistical learning, which asks how we can use computationally efficient, scalable procedures to learn about patterns in data, but still using a statistical framework to understand how these procedures work;
- machine learning, which is also interested in computationally efficient, scalable procedures, but is less interested in the statistical properties of the answers;
- artificial intelligence, which generalizes machine learning to a much broader framework of 'computer architectures to solve problems'.
Statistical learning and machine learning in particular are very similar, but statistical learning is a little closer to statistics and machine learning is a little closer to computer science. Someone who works in SL is more likely to use confidence intervals to describe uncertainty, while someone who works in ML would (more likely) use risk bounds. People who do SL are generally interested in both prediction and inference, while ML tends to be more focused on prediction (although not exclusively: quantifying variable importance can be thought of as a form of inference). For what it's worth, [Wikipedia says](https://en.wikipedia.org/wiki/Machine_learning)
>
Some statisticians have adopted methods from machine learning, leading to a combined field that they call statistical learning
There are big overlaps between the steps in this gradient, and it's arguably not a strict gradient (for example, you could argue that computational statistics and statistical learning are overlapping subsets of applied statistics).
| null | CC BY-SA 4.0 | null | 2023-06-03T21:10:44.450 | 2023-06-03T21:21:51.590 | 2023-06-03T21:21:51.590 | 2126 | 2126 | null |
617793 | 1 | null | null | 0 | 6 | I have contradictory output when using R survival analysis predictions with the "response" type.
I coded the following and ran:
```
y <- rweibull(1000, shape=2, scale=exp(4))
e <- rep(1, 1000)
mean(y) # 49.50
median(y) # 46.40
library(survival)
model <- survreg(Surv(y, e) ~ 1)
summary(model)
predict(model, type="response", newdata=data.frame(1)) # 55.87
predict(model, type="quantile", newdata=data.frame(1), p=c(0.5)) # 46.54
```
However, I checked the output with the expected value and median of the fitted Weibull:
```
scale <- exp(model$coef[1]); scale; log(scale)
shape <- 1/model$scale; shape
expected <- scale*gamma(1+1/shape); expected # 49.51
med <- qweibull(0.5, shape=shape, scale=scale); med # 46.54
```
In sum, the quantiles coincide but the expected values differ. What does `predict.survreg` with `type="response"` do?
| R Survival Survreg Predict with "response" type | CC BY-SA 4.0 | null | 2023-06-03T23:12:10.920 | 2023-06-04T00:13:02.877 | 2023-06-04T00:13:02.877 | 345611 | 389527 | [
"regression",
"survival",
"predictive-models"
] |
617794 | 1 | null | null | 0 | 7 | I have a simulated clinical data set that has multiple observations of patients over time for a given endpoint. I'm trying to re-create an MMRM model in R that was initially done in SAS using PROC MIXED with an UNSTRUCTURED covariance structure, but I only have a written description of the model:
fixed effects
- arm
- visit
- arm * visit
- endpoint baseline value
random effects
- linear slope of visit
dependent variable
- Change from baseline in endpoint
My issue is that when I am fitting an MMRM using the code below:
```
library(nlme)

mmrm <- lme(endpoint ~ arm + visit + arm*visit + baseline,
data = df,
method = "REML",
na.action = na.omit,
random = ~visit|pt_id,
correlation = nlme::corSymm(form = ~visit|pt_id),
weights = nlme::varIdent(form = ~1|visit),
control=lmeControl(msMaxIter = 200, opt = "optim"))
```
Plotting the LS means (shown below) shows a linear slope over time, however, when I plot the observed data the non-linearity is evident. Additionally, the LS means of the original model also show varying slopes over time and look very similar to the plot of the observed data.
Why is the model not allowing the slope of the change from baseline in the endpoint (dependent variable) to vary over time?
Observed data (mean +/- se)
[](https://i.stack.imgur.com/RRDHw.png)
MMRM LS means +/- se
[](https://i.stack.imgur.com/ZNV55.png)
Example Data
```
pt_id arm visit baseline endpoint
1 1 1 0.77 0.76
1 1 2 0.77 0.70
1 1 3 0.77 0.72
1 1 4 0.77 NA
2 1 1 0.63 0.65
2 1 2 0.63 0.60
2 1 3 0.63 0.57
2 1 4 0.63 0.55
3 0 1 0.77 0.78
3 0 2 0.77 0.70
3 0 3 0.77 0.72
3 0 4 0.77 0.65
4 0 1 0.63 0.65
4 0 2 0.63 0.60
4 0 3 0.63 NA
4 0 4 0.63 NA
```
| MMRM not generating random slopes over time | CC BY-SA 4.0 | null | 2023-06-03T23:31:23.367 | 2023-06-03T23:31:23.367 | null | null | 216450 | [
"r",
"mixed-model",
"clinical-trials"
] |
617797 | 1 | null | null | 1 | 9 | There are many real-world phenomena in which a variable of a population follows the Pareto distribution. I am wondering, what are the sufficient conditions for the distribution to be Pareto? And if it is possible to go further, what are the necessary and sufficient conditions?
I am wondering if there is anything analogous to the central limit theorem, which explains how and why the normal distribution arises when we consider sample means for sampling with replacement. Granted, the Pareto distribution commonly applies to populations (if I'm not mistaken), so my analogy is not perfect, but I hope my point is communicated nonetheless.
To put it shortly, is there a characterization theorem that explains why this distribution tends to be so naturally occurring? Even better, how can I reasonably tell ahead of time when and when not something is going to follow the Pareto distribution?
| Are there conditions for which the Pareto distribution arises? Are there characterization theorems of the Pareto distribution? | CC BY-SA 4.0 | null | 2023-06-04T01:34:24.307 | 2023-06-04T02:07:15.177 | 2023-06-04T02:07:15.177 | 316764 | 316764 | [
"distributions",
"central-limit-theorem",
"pareto-distribution"
] |
617798 | 1 | null | null | 0 | 5 | After differencing I saw that my constant/intercept is not statistically significant. Does anybody know how to fit the same model without the const term?
I'm using statsmodels.tsa.arima.model.
To give a related example: I have `ARIMA(data, order=(3,0,0))`, an AR(3) model, and say that the second coefficient is insignificant. I can get rid of that coefficient by typing
```
ARIMA(data, order=([1, 3], 0, 0))
```
but how can I get rid of the constant term?
| python ARIMA remove intercept | CC BY-SA 4.0 | null | 2023-06-04T03:21:51.860 | 2023-06-04T04:02:50.557 | 2023-06-04T04:02:50.557 | 373791 | 373791 | [
"statistical-significance",
"python",
"arima",
"intercept"
] |