---
pipeline_tag: text-classification
---

## Usage

To use this model, please install BERTopic:

```bash
pip install bertopic
```

You can use the model as follows:

```python
from bertopic import BERTopic

topic_model = BERTopic.load("Alprocco/semi_supervised_bertopic")

topic_model.get_topic_info()
```
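
Once loaded, the model can also assign topics to unseen documents with `transform`. The snippet below is a minimal sketch; the example documents are hypothetical placeholders, not part of the training data.

```python
from bertopic import BERTopic

# Load the pretrained topic model from the Hugging Face Hub
topic_model = BERTopic.load("Alprocco/semi_supervised_bertopic")

# Hypothetical new documents (the model is multilingual)
docs = [
    "The central bank raised interest rates again this quarter.",
    "Le nouveau film a été présenté au festival de Cannes.",
]

# Predict a topic for each document
topics, probs = topic_model.transform(docs)

for doc, topic in zip(docs, topics):
    # get_topic returns the top words of the assigned topic with their scores
    print(topic, topic_model.get_topic(topic), "<-", doc)
```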

## Topic overview

* Number of topics: 30

## Training hyperparameters

* calculate_probabilities: False
* language: multilingual
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: 30
* top_n_words: 10
* verbose: True
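
For reference, these settings map directly onto the `BERTopic` constructor. The sketch below shows how a comparable model could be set up; the documents and partial labels are hypothetical placeholders (with `-1` marking unlabeled documents, as in BERTopic's semi-supervised workflow), since the actual training corpus is not part of this card.

```python
from bertopic import BERTopic

# Hypothetical corpus and partial labels (-1 = unlabeled); replace with a
# full corpus and your own label scheme before training.
docs = ["first document ...", "second document ...", "third document ..."]
labels = [0, -1, 1]

# Mirror the hyperparameters listed above
topic_model = BERTopic(
    language="multilingual",
    calculate_probabilities=False,
    low_memory=False,
    min_topic_size=10,
    n_gram_range=(1, 1),
    nr_topics=30,
    top_n_words=10,
    verbose=True,
)

# Passing y enables semi-supervised topic modeling
topics, probs = topic_model.fit_transform(docs, y=labels)
```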

Note: when saving the model, make sure to also keep track of the versions of the dependencies and Python that were used. Loading and saving the model should be done with the same dependency and Python versions.

Moreover, models saved in one version of BERTopic are not guaranteed to load in other versions.
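
As an illustration, a save/load round trip under one pinned environment might look like the sketch below; the target directory is a placeholder, and `safetensors` is only one of BERTopic's serialization options (`pickle` and `pytorch` also exist).

```python
from bertopic import BERTopic

topic_model = BERTopic.load("Alprocco/semi_supervised_bertopic")

# Save to a local directory (placeholder path). Even with safetensors,
# the BERTopic, dependency, and Python versions should match between
# saving and loading.
topic_model.save(
    "semi_supervised_bertopic_local",
    serialization="safetensors",
    save_ctfidf=True,
)

restored = BERTopic.load("semi_supervised_bertopic_local")
```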

## Framework versions

* BERTopic: 0.15.0
* Numpy: 1.24.4
* HDBSCAN: 0.8.33
* UMAP: 0.5.4
* Pandas: 2.0.3
* Scikit-Learn: 1.0.2
* Sentence-transformers: 2.2.2
* Transformers: 4.33.2
* Numba: 0.58.0
* Plotly: 5.17.0
* Python: 3.8.10
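
To reproduce this environment before loading or re-saving the model, the dependencies above can be pinned explicitly (on PyPI, UMAP is published as `umap-learn` and HDBSCAN as `hdbscan`):

```bash
pip install bertopic==0.15.0 numpy==1.24.4 hdbscan==0.8.33 umap-learn==0.5.4 \
    pandas==2.0.3 scikit-learn==1.0.2 sentence-transformers==2.2.2 \
    transformers==4.33.2 numba==0.58.0 plotly==5.17.0
```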