MilesCranmer committed on
Commit 1f1f9b0
1 Parent(s): 82e182e

Update todo

Files changed (3)
  1. README.md +15 -13
  2. eureqa.jl +2 -2
  3. paralleleureqa.jl +1 -1
README.md CHANGED
@@ -43,10 +43,11 @@ const binops = [plus, mult, pow]
 You can change the dataset here:
 ```julia
 const X = convert(Array{Float32, 2}, randn(100, 5)*2)
-# Here is the function we want to learn (x2^2 + cos(x3))
-const y = convert(Array{Float32, 1}, ((cx,)->cx^2).(X[:, 2]) + cos.(X[:, 3]))
+# Here is the function we want to learn (x2^2 + cos(x3) - 5)
+const y = convert(Array{Float32, 1}, ((cx,)->cx^2).(X[:, 2]) + cos.(X[:, 3]) .- 5)
 ```
 by either loading in a dataset, or modifying the definition of `y`.
+(The `.` is used to vectorize a scalar function.)
 
 ### Hyperparameters
 
@@ -66,27 +67,23 @@ const alpha = 10.0
 ```
 Larger alpha means more exploration.
 
-One can also adjust the relative probabilities of each mutation here:
+One can also adjust the relative probabilities of each operation here:
 ```julia
-weights = [8, 1, 1, 1, 2]
+weights = [8, 1, 1, 1, 0.1, 2]
 ```
 (for: 1. perturb constant, 2. mutate operator,
-3. append a node, 4. delete a subtree, 5. do nothing).
+3. append a node, 4. delete a subtree, 5. simplify equation,
+6. do nothing).
 
 
 # TODO
 
+- [ ] Explicit constant optimization on hall-of-fame
+- [ ] Hyperparameter tune
 - [ ] Create a Python interface
-- [x] Create a benchmark for speed
 - [ ] Create a benchmark for accuracy
-- [x] Record hall of fame
-- [x] Optionally (with hyperparameter) migrate the hall of fame, rather than current bests
-- [x] Test performance of reduced precision integers
-  - No effect
 - [ ] Create struct to pass through all hyperparameters, instead of treating as constants
   - Make sure doesn't affect performance
-- [ ] Hyperparameter tune
-- [ ] Simplify subtrees with only constants beneath them. Or should I? Maybe randomly simplify sometimes?
 - [ ] Use NN to generate weights over all probability distribution, and train on some randomly-generated equations
 - [ ] Performance:
   - [ ] Use an enum for functions instead of storing them?
@@ -95,4 +92,9 @@ weights = [8, 1, 1, 1, 2]
   - Seems like it's necessary right now. But still by far the slowest option.
 - [ ] Calculating the loss function - there are duplicate calculations happening.
 - [ ] Declaration of the weights array every iteration
-
+- [x] Create a benchmark for speed
+- [x] Simplify subtrees with only constants beneath them. Or should I? Maybe randomly simplify sometimes?
+- [x] Record hall of fame
+- [x] Optionally (with hyperparameter) migrate the hall of fame, rather than current bests
+- [x] Test performance of reduced precision integers
+  - No effect
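The README's suggestion to "load in a dataset" rather than use the random one could look like the following minimal sketch. The file name `mydata.csv` and its layout (comma-delimited, features in every column but the last, target in the last) are assumptions for illustration, not something the repository ships:

```julia
using DelimitedFiles  # stdlib reader for delimited text files

# Hypothetical file: one row per sample, last column is the target.
data = readdlm("mydata.csv", ',', Float32)

const X = data[:, 1:end-1]  # feature matrix, Array{Float32, 2}
const y = data[:, end]      # target vector, Array{Float32, 1}
```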
eureqa.jl CHANGED
@@ -17,7 +17,7 @@ const ns=10;
 ##########################
 # # Dataset to learn
 const X = convert(Array{Float32, 2}, randn(100, 5)*2)
-const y = convert(Array{Float32, 1}, ((cx,)->cx^2).(X[:, 2]) + cos.(X[:, 3]))
+const y = convert(Array{Float32, 1}, ((cx,)->cx^2).(X[:, 2]) + cos.(X[:, 3]) .- 5)
 ##########################
 
 ##################
@@ -364,7 +364,7 @@ function iterate(
 
     mutationChoice = rand()
     weight_for_constant = min(8, countConstants(tree))
-    weights = [weight_for_constant, 1, 1, 1, 1, 2] .* 1.0
+    weights = [weight_for_constant, 1, 1, 1, 0.1, 2] .* 1.0
    weights /= sum(weights)
     cweights = cumsum(weights)
     n = countNodes(tree)
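For context on the `weights` edit above: `iterate` normalizes the weight vector and takes its cumulative sum, so a single uniform draw selects a mutation. A self-contained sketch of that selection scheme follows; the operation labels are descriptive placeholders, not the repository's function names:

```julia
# Relative weights for: 1. perturb constant, 2. mutate operator,
# 3. append node, 4. delete subtree, 5. simplify equation, 6. do nothing.
weights = [8.0, 1.0, 1.0, 1.0, 0.1, 2.0]
weights /= sum(weights)     # normalize to probabilities
cweights = cumsum(weights)  # cumulative distribution over mutations

mutationChoice = rand()     # uniform draw in [0, 1)
# The first cumulative weight exceeding the draw picks the mutation:
chosen = findfirst(>(mutationChoice), cweights)

opNames = ("perturb constant", "mutate operator", "append node",
           "delete subtree", "simplify equation", "do nothing")
println("chose mutation: ", opNames[chosen])
```

Note that lowering entry 5 to `0.1` makes the new "simplify equation" mutation rare relative to the other operations.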
paralleleureqa.jl CHANGED
@@ -3,7 +3,7 @@ include("eureqa.jl")
 const nthreads = Threads.nthreads()
 const migration = true
 const hofMigration = true
-const fractionReplacedHof = 0.05f0
+const fractionReplacedHof = 0.1f0
 
 # List of the best members seen all time
 mutable struct HallOfFame
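The bump to `fractionReplacedHof` controls how much of each population is overwritten by hall-of-fame members when `hofMigration` is enabled. A rough sketch of that idea, using hypothetical `population` and `hallOfFame` vectors rather than the repository's actual structs:

```julia
# Overwrite a random fraction of a population with copies of
# hall-of-fame members (hypothetical container types for illustration).
function migrateHof!(population::Vector, hallOfFame::Vector,
                     fractionReplacedHof::Real)
    nReplace = round(Int, fractionReplacedHof * length(population))
    for _ in 1:nReplace
        slot = rand(1:length(population))              # member to overwrite
        population[slot] = deepcopy(rand(hallOfFame))  # random elite copy
    end
    return population
end
```

With the new value of `0.1f0`, roughly 10% of each population would be reseeded from the hall of fame per migration step, up from 5%.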