AnoushkaJain3 committed on
Commit
ddc46cf
1 Parent(s): 0f38707

Update README.md

Files changed (1): README.md +43 -4
README.md CHANGED
@@ -6,10 +6,49 @@ tags:
  - biology
  ---
 
- Use these models if you are using Neuropixels on mice.
- The model is trained on 11 mice in V1, SC and ALM. Each recording was labelled by at least two people, and in different combinations.
- The agreement amongst labellers is 80%
 
  There are two tutorial notebooks:
  1. Model_based_curation.ipynb
- 2. Train_new_model.ipynb
  - biology
  ---
 
+ To reduce the effort of manual curation, we developed a machine learning approach, trained on Neuropixels recordings, that uses quality metrics to automatically identify noise clusters and isolate single-unit activity.
+ Compatible with the SpikeInterface API, our method generalizes across probes and species.
+
+ We trained the models on Neuropixels recordings from 11 mice, covering V1, SC and ALM. Each recording was labelled by at least two people, in different combinations, and the agreement amongst labellers is 80%.
+ Two models are provided: "noise_meuron_model.skops", used to identify noise clusters, and "sua_mua_model.skops", used to isolate single-unit activity (SUA).
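
Both models take SpikeInterface quality metrics as input. Below is a minimal sketch of how those metrics can be prepared, assuming a recent spikeinterface version; the synthetic recording and the exact extension list are illustrative only, not part of this repository:

```python
import spikeinterface.full as si

# Toy recording/sorting pair for illustration; substitute your own data,
# e.g. loaded with si.read_openephys(...) and si.read_kilosort(...).
recording, sorting = si.generate_ground_truth_recording(durations=[60.0], num_units=10)

# Build a SortingAnalyzer and compute the extensions the metrics depend on.
sorting_analyzer = si.create_sorting_analyzer(sorting=sorting, recording=recording)
sorting_analyzer.compute(["random_spikes", "waveforms", "templates", "noise_levels"])

# Quality metrics are the features the noise/SUA classifiers consume.
sorting_analyzer.compute("quality_metrics")
```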
 
  There are two tutorial notebooks:
+
  1. Model_based_curation.ipynb
+
+ If you already have fitted models, you can use this notebook to predict labels for new recordings.
+
+ ```python
+ from spikeinterface.curation import auto_label_units
+
+ # Predict a label for each unit using a previously fitted model
+ labels = auto_label_units(
+     sorting_analyzer=sorting_analyzer,
+     model_folder="SpikeInterface/a_folder_for_a_model",
+     trusted=["numpy.dtype"],
+ )
+ ```
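
To use the `.skops` models from this repository, one option is to download them locally first. A sketch using `huggingface_hub`; the repo id below is a placeholder to replace with this repository's actual id, and `sorting_analyzer` is the analyzer from the snippet above:

```python
from huggingface_hub import snapshot_download
from spikeinterface.curation import auto_label_units

# Download the model repository (placeholder repo id) to a local folder;
# snapshot_download returns the path of the downloaded snapshot.
local_model_folder = snapshot_download(repo_id="<user>/<this-model-repo>")

# Point auto_label_units at the downloaded folder containing the .skops files.
labels = auto_label_units(
    sorting_analyzer=sorting_analyzer,
    model_folder=local_model_folder,
    trusted=["numpy.dtype"],
)
```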
+ 2. Train_new_model.ipynb
+
+ If you want to train a model on your own manually curated recordings, use this notebook.
+
+ ```python
+ from spikeinterface.curation.train_manual_curation import train_model
+
+ # Train classifiers on manually curated SortingAnalyzers and their labels
+ trainer = train_model(
+     mode="analyzers",
+     labels=labels,
+     analyzers=[labelled_analyzer, labelled_analyzer],
+     output_folder=str(output_folder),
+     imputation_strategies=None,
+     scaling_techniques=None,
+     classifiers=None,  # defaults to Random Forest only; other classifiers to try:
+                        # "AdaBoostClassifier", "GradientBoostingClassifier",
+                        # "LogisticRegression", "MLPClassifier"
+ )
+
+ # The best-performing fitted pipeline found during training
+ best_model = trainer.best_pipeline
+ ```
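
After training, the saved model can be applied to new recordings with the same `auto_label_units` call shown earlier. A sketch reusing the names from the snippets above; `unlabelled_analyzer` is a placeholder for the analyzer of an uncurated recording:

```python
from spikeinterface.curation import auto_label_units

# Apply the newly trained model, saved by train_model in `output_folder`,
# to an uncurated recording's SortingAnalyzer.
labels = auto_label_units(
    sorting_analyzer=unlabelled_analyzer,  # placeholder analyzer for new data
    model_folder=str(output_folder),       # folder written by train_model above
    trusted=["numpy.dtype"],
)
```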