Jensen-holm committed
Commit 7855334
1 Parent(s): 84ec87c

better about and readme docs

Files changed (2):
  1. README.md +7 -1
  2. about_package.md +10 -4
README.md CHANGED
@@ -14,7 +14,13 @@ license: mit
 # Numpy-Neuron
 
 A small, simple neural network framework built using only [numpy](https://numpy.org) and python (duh).
- Here is an example of how to use the package for training a classifier.
+
+ ## Install
+
+ `pip install numpy_neuron`
+
+
+ ## Example
 
 ```py
 from sklearn import datasets
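The README's example is truncated in this diff right after its first import, but the removed line described it as training a classifier, and it pulls its data from sklearn. As a rough sketch of the data-preparation side only (which sklearn dataset the real example uses is not shown, so iris is just a placeholder, and none of numpy_neuron's own model-building calls are assumed here):

```py
from sklearn import datasets
import numpy as np

# Load a small toy classification dataset and one-hot encode the integer
# labels, the usual input shape for a from-scratch numpy network.
X, y = datasets.load_iris(return_X_y=True)
Y = np.eye(3)[y]           # (150, 3) one-hot targets for the 3 iris classes

print(X.shape, Y.shape)    # (150, 4) (150, 3)
```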
about_package.md CHANGED
@@ -1,7 +1,13 @@
 # Numpy-Neuron
 
 A small, simple neural network framework built using only [numpy](https://numpy.org) and python (duh).
- Here is an example of how to use the package for training a classifier.
+
+ ## Install
+
+ `pip install numpy_neuron`
+
+
+ ## Example
 
 ```py
 from sklearn import datasets
@@ -71,11 +77,11 @@ if __name__ == "__main__":
     train_nn_classifier()
 ```
 
-
 ## Roadmap
 
 **Optimizers**
- I would love to add the ability to modify the learning rate over each epoch to ensure
- that the gradient descent algorithm does not get stuck in local minima as easily.
 
+ Currently the learning rate in a NN object is static during training. I would like
+ to work on developing at least the functionality for the Adam optimizer at some point.
+ This would help prevent getting stuck in local minima of the loss function.
 
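For context on the roadmap note above: Adam replaces a single static learning rate with per-parameter step sizes derived from running estimates of the gradient's first and second moments, which is what the note credits with avoiding getting stuck in local minima. The snippet below is a generic numpy illustration of that update rule, not numpy_neuron code; the `adam_step` function and its hyperparameter defaults are only illustrative.

```py
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a single weight array w, given its gradient."""
    m = beta1 * m + (1 - beta1) * grad           # running mean of gradients
    v = beta2 * v + (1 - beta2) * grad ** 2      # running mean of squared gradients
    m_hat = m / (1 - beta1 ** t)                 # bias correction for early steps
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)  # per-parameter adaptive step
    return w, m, v

# Toy usage: keep m and v (zeros, same shape as each weight array) alongside
# the weights and call adam_step once per parameter per training step.
w = np.random.randn(4, 3)
m, v = np.zeros_like(w), np.zeros_like(w)
for t in range(1, 101):
    grad = 2 * w                                 # gradient of the toy loss ||w||^2
    w, m, v = adam_step(w, grad, m, v, t)
```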