AnoushkaJain3 committed on
Commit 0702ff7
1 Parent(s): ce1a3f9

Update README.md

Files changed (1)
  1. README.md +12 -8
README.md CHANGED
There are two tutorial notebooks:

2. Load your sorting, depending on the spike sorter you used, to create the 'sorting' object.
3. Create a SortingAnalyzer object and compute quality metrics.
 
`auto_label_units` is the main function in this notebook. See the API reference for its parameters:
(https://spikeinterface--2918.org.readthedocs.build/en/2918/api.html#spikeinterface.curation.auto_label_units)

``` python
from spikeinterface.curation import auto_label_units

labels = auto_label_units(
    sorting_analyzer=sorting_analyzer,
    model_folder="SpikeInterface/a_folder_for_a_model",
    trusted=['numpy.dtype'])
```

2. Train_new_model.ipynb

If you have your own manually curated data (e.g., from other species), this notebook allows you to train a new model using your specific data. Here you need to follow the three steps mentioned before, but you also need to provide your manually curated labels.

`train_model` is the main function to train your model. API link:
https://spikeinterface--2918.org.readthedocs.build/en/2918/api.html#spikeinterface.curation.train_model

``` python
from spikeinterface.curation.train_manual_curation import train_model

trainer = train_model(mode="analyzers",
                      # ... (arguments elided in this excerpt) ...
                      output_folder=str(output_folder),
                      imputation_strategies=None,
                      scaling_techniques=None,
                      classifiers=None)  # defaults to Random Forest only
# Other classifiers you can try: ["AdaBoostClassifier", "GradientBoostingClassifier",
# "LogisticRegression", "MLPClassifier", "XGBoost", "LightGBM", "CatBoost"]
```

Acknowledgments:
 