Question about sparsemixer

#3
by DhanOS

Hi there!
First of all - great work! :)

I'm experimenting with sparsemixer on a different model architecture, and I'm looking at the sparsemixer code for two reasons:

  1. to make it work with DeepSpeed (DeepSpeed hangs after a few steps in my testing)
  2. to make it work with top_k>2

I have 3 questions, if you don't mind:

  1. There are two almost identical blocks of code in sparsemixer that could, I think, be put in a loop. The only difference is line 834 (the first block, for the first expert):
            torch.rand_like(max_scores) > 0.75 # Heun's third-order method: f(x) - f(0) = .25 f'(x) + .75 f'(x/3.)

and line 881 (for the second expert):

            torch.rand_like(max_scores).uniform_() > 0.75 # Heun's third-order method: f(x) - f(0) = .25 f'(x) + .75 f'(x/3.)

Is this draw from a uniform distribution added there for a particular reason? It samples twice now - once in the rand_like call and then again in the in-place uniform_ call, which overwrites the first draw.
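To illustrate (a minimal check; the shape of max_scores here is made up):

    import torch

    # hypothetical stand-in for max_scores, just to compare the two expressions
    max_scores = torch.randn(4, 1)

    # first block (line 834): a single uniform draw,
    # since torch.rand_like already samples from U(0, 1)
    once = torch.rand_like(max_scores) > 0.75

    # second block (line 881): the in-place .uniform_() refills the tensor
    # with a second U(0, 1) draw, discarding the values from rand_like
    twice = torch.rand_like(max_scores).uniform_() > 0.75

    # both comparisons are Bernoulli(0.25); the second version just pays
    # for one redundant sampling pass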

  2. Have you tested it with DeepSpeed ZeRO3? It may be an issue with how I integrated sparsemixer into my experiment, but the same model with softmax+topk trains just fine (the modeling code contains a workaround for hangs with ZeRO3; I understand what the problem is).

  3. Are there any additional considerations for making it work with top_k>2, or is top_k=2 just how it was implemented for this experiment with the model you trained?

Thank you

Microsoft org
  1. There is no particular reason; the second .uniform_() call could be removed.
  2. Yes, we tried that. We ended up using ZeRO1 + PP + activation checkpointing, which yielded the best throughput (much better than ZeRO3).
  3. Yes, there are:
  • We model top-k sampling as iterative sampling, which brings issues. For example, if k is 4 and the four experts you get are a, b, c, d, then sampling a->b->c->d and d->c->b->a yields the same set of experts, but the two orderings are treated as separate events. This complicates the gradient computation.
  • When multiple experts are activated, it is a little tricky to integrate the first-order and third-order estimators. To start, I would recommend using the first-order estimator only (set mask_for_one to 1.0 in all cases); see the sketch below this list.
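To make that suggestion concrete, here is a rough sketch of what a first-order-only variant could look like for general top_k. The function name, the jitter_eps default, and the plain multinomial sampling are illustrative assumptions; the actual sparsemixer additionally uses Gumbel sampling during training and wraps the multiplier in a custom autograd function, both omitted here:

    import torch

    def sparsemixer_topk_first_order(scores, top_k, jitter_eps=0.01):
        # scores: router logits of shape (num_tokens, num_experts)
        masked_scores = scores
        experts, multipliers = [], []
        for _ in range(top_k):
            with torch.no_grad():
                # mask out logits too far below the current maximum
                max_scores = masked_scores.max(dim=-1, keepdim=True).values
                factor = torch.maximum(scores.abs(), max_scores.abs())
                sparsity_mask = (max_scores - masked_scores) / factor > 2 * jitter_eps
            gates = masked_scores.masked_fill(sparsity_mask, float('-inf')).softmax(dim=-1)
            selected = torch.multinomial(gates, num_samples=1)
            # first-order estimator only: use the gate value directly,
            # with no mask_for_one / third-order correction
            multipliers.append(gates.gather(-1, selected))
            experts.append(selected)
            # remove the chosen expert before sampling the next one
            masked_scores = masked_scores.scatter(-1, selected, float('-inf'))
        return torch.cat(experts, dim=-1), torch.cat(multipliers, dim=-1)

At inference time the multinomial sampling would be replaced by an argmax over gates, as in the existing top_k=2 code.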
