QubitPi committed
Commit 2a9d5df (unverified)
Parent(s): 5b00b76

Optimize model loading with cache (#83)

Files changed (2):
  1. .github/workflows/ci-cd.yaml +1 -0
  2. app.py +1 -1
.github/workflows/ci-cd.yaml CHANGED
@@ -21,6 +21,7 @@ on:
 
 jobs:
   sync-to-huggingface-space:
+    if: github.ref == 'refs/heads/master'
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v3
app.py CHANGED
@@ -44,7 +44,7 @@ with col2:
     ###### ➠ If you want to translate the subtitles to English, select the task as "Translate"
     ###### I recommend starting with the base model and then experimenting with the larger models, the small and medium models often work well. """)
 
-
+@st.cache_resource
 def change_model(current_size, size):
     if current_size != size:
         loaded_model = whisper.load_model(size)
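
The optimization works because Streamlit's @st.cache_resource decorator memoizes the decorated function by its arguments and keeps the returned object alive across script reruns and sessions, so whisper.load_model() only executes the first time a given model size is requested. A minimal sketch of the pattern, assuming Streamlit >= 1.18 and the openai-whisper package (load_whisper_model is a hypothetical name used here for illustration, not the repo's change_model):

import streamlit as st
import whisper  # openai-whisper


@st.cache_resource  # cache the returned model object, keyed by the function arguments
def load_whisper_model(size: str):
    # Heavy call: downloads/loads the checkpoint only on the first call with a
    # given size; later Streamlit reruns with the same size reuse the cached model.
    return whisper.load_model(size)


model = load_whisper_model("base")  # cheap on every rerun after the first

Decorating change_model directly behaves the same way: the cache is keyed by the (current_size, size) argument pair, so switching sizes in the UI triggers at most one load per distinct combination instead of reloading the model on every rerun.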