Update README.md
README.md
@@ -22,7 +22,7 @@ Visualization of Prune Once for All method from [Zafrir et al. (2021)](https://arxiv.org/abs/2111.05754)
 | Architecture | "The method consists of two steps, teacher preparation and student pruning. The sparse pre-trained model we trained is the model we use for transfer learning while maintaining its sparsity pattern. We call the method Prune Once for All since we show how to fine-tune the sparse pre-trained models for several language tasks while we prune the pre-trained model only once." [(Zafrir et al., 2021)](https://arxiv.org/abs/2111.05754) |
 | Paper or Other Resources | [Zafrir et al. (2021)](https://arxiv.org/abs/2111.05754); [GitHub Repo](https://github.com/IntelLabs/Model-Compression-Research-Package/tree/main/research/prune-once-for-all) |
 | License | Apache 2.0 |
-| Questions or Comments | [Community Tab](https://huggingface.co/Intel/bert-
+| Questions or Comments | [Community Tab](https://huggingface.co/Intel/bert-large-uncased-sparse-90-unstructured-pruneofa/discussions) and [Intel Developers Discord](https://discord.gg/rv2Gp55UJQ)|

 | Intended Use | Description |
 | ----------- | ----------- |
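
The Architecture row in the diff above describes transfer learning from a sparse pre-trained model while maintaining its sparsity pattern. As a minimal sketch, assuming the checkpoint named in the Community Tab link loads with the standard `transformers` masked-LM class and that `torch` is installed, one can load it and inspect how much of its weight matrices are zero:

```python
# Minimal sketch: load the sparse pre-trained checkpoint referenced in the card
# and report its weight sparsity. The model ID comes from the Community Tab link;
# using AutoModelForMaskedLM is an assumption about the checkpoint type.
import torch
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained(
    "Intel/bert-large-uncased-sparse-90-unstructured-pruneofa"
)

total, zeros = 0, 0
for name, param in model.named_parameters():
    # Count zeros only in 2-D weight matrices (linear layers and embeddings);
    # biases and norm parameters are skipped.
    if name.endswith("weight") and param.dim() == 2:
        total += param.numel()
        zeros += (param == 0).sum().item()

print(f"Weight sparsity: {zeros / total:.2%}")
```

Downstream fine-tuning would then proceed as with any BERT checkpoint; per the quoted description, the sparsity pattern is meant to be preserved during that fine-tuning, and the GitHub repo linked in the Resources row is the place to look for that workflow.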