schuler committed
Commit 988a04f · verified · 1 Parent(s): 21f9e78

Update README.md

Files changed (1)
  1. README.md +14 -9
README.md CHANGED
@@ -1,16 +1,21 @@
- ---
- library_name: transformers
- license: mit
- datasets:
- - MBZUAI/LaMini-instruction
- language:
- - en
- ---
  # Saving 77% of the Parameters in Large Language Models Technical Report
  This repository contains experiment results for the [Saving 77% of the Parameters in Large Language Models Technical Report (PDF)](https://www.researchgate.net/publication/388835829_SAVING_77_OF_THE_PARAMETERS_IN_LARGE_LANGUAGE_MODELS_TECHNICAL_REPORT).

  ## Abstract
- This technical report demonstrates that large language models (LLMs) can maintain their learning capacity while reducing their non-embedding parameters by up to 77%. We achieve this by adapting a parameter reduction technique originally developed for computer vision, replacing dense layers with an optimized subnetwork that contains grouped pointwise convolutions. Using Microsoft's phi-3-mini-4k-instruct as our baseline, we show that our optimized model (kphi-3) achieves comparable validation loss while using only 15-23% of the original non-embedding parameters. All experiments were conducted on a single NVIDIA L4 GPU within a 3-day timeframe, supporting the democratization of AI research. Our findings suggest that current LLM architectures may be substantially overparameterized, opening possibilities for more efficient model training and deployment.

  ## Key Findings
  - Achieved 77% parameter reduction while maintaining model performance.
+ ---
+ library_name: transformers
+ license: mit
+ datasets:
+ - MBZUAI/LaMini-instruction
+ language:
+ - en
+ ---
  # Saving 77% of the Parameters in Large Language Models Technical Report
  This repository contains experiment results for the [Saving 77% of the Parameters in Large Language Models Technical Report (PDF)](https://www.researchgate.net/publication/388835829_SAVING_77_OF_THE_PARAMETERS_IN_LARGE_LANGUAGE_MODELS_TECHNICAL_REPORT).

  ## Abstract
+ This technical report demonstrates that large language models (LLMs) can maintain their learning capacity while reducing their non-embedding parameters by up to 77%.
+ We achieve this by adapting a parameter reduction technique originally developed for computer vision, replacing dense layers with an optimized subnetwork that
+ contains grouped pointwise convolutions. Using a 2-layer phi-3-mini-4k-instruct codebase from Microsoft as our baseline, we show that our optimized model (kphi-3)
+ achieves comparable validation loss while using only 15-23% of the original non-embedding parameters. Each experiment was conducted on a single NVIDIA L4 GPU within
+ a 3-day timeframe, supporting the democratization of AI research. Our findings suggest that current LLM architectures may be substantially overparameterized, opening
+ possibilities for more efficient model training and deployment.

  ## Key Findings
  - Achieved 77% parameter reduction while maintaining model performance.
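
The abstract above describes replacing dense layers with a subnetwork built from grouped pointwise convolutions. As a rough illustration of why that saves parameters, here is a minimal PyTorch sketch, not the report's exact kphi-3 subnetwork: two grouped kernel-size-1 convolutions with a channel shuffle in between, compared against a plain dense layer. The module name `GroupedPointwiseMLP`, the 3072 hidden width, and the choice of 8 groups are assumptions made for this example.

```python
import torch
import torch.nn as nn

class GroupedPointwiseMLP(nn.Module):
    """Illustrative stand-in for a dense layer: two grouped pointwise
    (kernel_size=1) convolutions with a channel shuffle in between so that
    features can still mix across groups."""

    def __init__(self, d_in: int, d_out: int, groups: int = 8):
        super().__init__()
        assert d_in % groups == 0 and d_out % groups == 0
        self.groups = groups
        # Each grouped layer holds d_in * d_out / groups weights instead of d_in * d_out.
        self.conv1 = nn.Conv1d(d_in, d_out, kernel_size=1, groups=groups, bias=False)
        self.conv2 = nn.Conv1d(d_out, d_out, kernel_size=1, groups=groups, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_in); Conv1d expects (batch, channels, seq_len).
        h = self.conv1(x.transpose(1, 2))
        # Channel shuffle: interleave channels so conv2 sees features from every group of conv1.
        b, c, t = h.shape
        h = h.view(b, self.groups, c // self.groups, t).transpose(1, 2).reshape(b, c, t)
        return self.conv2(h).transpose(1, 2)

# Parameter count comparison against a plain dense layer of the same width.
dense = nn.Linear(3072, 3072, bias=False)
grouped = GroupedPointwiseMLP(3072, 3072, groups=8)
print(sum(p.numel() for p in dense.parameters()))    # 9,437,184
print(sum(p.numel() for p in grouped.parameters()))  # 2,359,296 (~25% of the dense layer)

x = torch.randn(2, 16, 3072)   # (batch, seq_len, hidden)
print(grouped(x).shape)        # torch.Size([2, 16, 3072])
```

With 8 groups, each grouped layer stores 1/8 of the dense layer's weights, so the two stacked layers above land at roughly 25% of the original count. The channel shuffle is one common way to restore cross-group mixing, in the spirit of the grouped-convolution computer-vision technique the report adapts; the actual kphi-3 architecture and its group counts are detailed in the linked report.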