text,start,duration
hello everybody David Shapiro here with,0.719,4.141
a brand new video,3.24,3.72
so today's video we're going to talk,4.86,6.0
about axiomatic alignment which is a,6.96,6.24
potential solution or part of the,10.86,4.8
solution to the control problem,13.2,5.339
before we dive into today's video I just,15.66,5.94
want to do a quick plug for my patreon I,18.539,5.041
have lots of folks on patreon we've got,21.6,4.439
a private Discord if you have any,23.58,6.6
questions about AI I am happy to consult,26.039,6.06
there's a few slots at the higher tiers,30.18,4.14
available which will give you one-on-one,32.099,4.861
meetings with me so without further Ado,34.32,5.16
let's jump right back into the show,36.96,5.64
so the control problem if you're not in,39.48,5.759
the know is basically at some point in,42.6,4.799
the future AI is going to get incredibly,45.239,3.32
powerful,47.399,3.48
there are basically two ways that this,48.559,4.301
can happen and the truth will,50.879,3.481
probably be somewhere in the middle so,52.86,4.08
for instance we might have what's called,54.36,5.4
hard takeoff where the exponential,56.94,5.22
returns of AI just kind of ramp up,59.76,5.34
really fast so that's actually faster,62.16,4.74
than exponential growth that would,65.1,3.66
actually be hyperbolic growth where,66.9,4.38
growth approaches infinity,68.76,4.679
um so that's like Peak Singularity,71.28,4.68
basically the other end of the spectrum,73.439,4.86
is where AI becomes more powerful,75.96,5.22
gradually over many decades,78.299,5.341
most of us don't think that that's going,81.18,3.84
to happen anymore there's a few people,83.64,3.659
who still think that AGI is you know,85.02,4.44
decades away those people don't,87.299,5.101
generally understand exponential growth,89.46,5.58
um so the truth is probably somewhere in,92.4,6.12
between furthermore not,95.04,5.7
all AGI is going to be created equal for,98.52,4.62
instance the first AGIs are going to,100.74,4.86
be you know human level intelligence and,103.14,4.92
adaptability but a little bit faster and,105.6,4.86
then in the future you know the power of,108.06,5.339
AGIs will also ramp up,110.46,5.58
anyways long story short one day,113.399,4.621
computers are going to be infinitely,116.04,4.38
smarter than all of us it's not really a,118.02,4.32
question of if but when,120.42,4.44
so there's a couple of problems uh that,122.34,6.24
underpin the control problem so what,124.86,5.099
what I just shared is the background,128.58,3.6
right that is the foundation or the,129.959,4.86
environment that we expect to happen,132.18,5.4
now the reason that the control,134.819,4.861
problem exists is because there are,137.58,3.54
quite a few,139.68,3.6
um paradigms in here but I picked out,141.12,3.839
two just because they're easier to talk,143.28,4.86
about as an example so for instance the,144.959,5.821
orthogonality thesis basically says that,148.14,4.739
intelligence is orthogonal or,150.78,4.74
uncorrelated to goals meaning that no,152.879,5.461
matter how smart an AI agent is that,155.52,5.1
does not necessarily have any bearing on,158.34,4.02
the goals that it picks which is,160.62,3.36
actually not necessarily true which,162.36,3.72
we'll unpack with the next,163.98,4.68
point which is instrumental,166.08,4.92
convergence so instrumental convergence,168.66,4.38
is the idea that whatever primary goals,171.0,5.4
an AI has it's going to have a few uh,173.04,5.52
common secondary or instrumental goals,176.4,5.1
such as resource acquisition or,178.56,5.16
protecting its own existence right,181.5,4.62
because let's say for instance,183.72,3.96
the paperclip maximizer which we'll talk,186.12,3.66
about in a minute the paperclip,187.68,3.96
maximizer wants to maximize paper clips,189.78,3.9
in the universe well in order to do that,191.64,4.319
it needs power and computation and it needs,193.68,4.68
to continue to exist so whatever other,195.959,4.741
goals you give an AI whether it's Skynet,198.36,5.879
or you know your chatbot robot you know,200.7,6.66
cat girl waifu or whatever it's going to,204.239,5.041
have a few other sub goals that all,207.36,4.2
machines are likely to have in common so,209.28,4.14
in that case the orthogonality thesis is,211.56,3.78
not necessarily true,213.42,3.72
again the point is that there's a lot of,215.34,4.619
theories out there about how and why we,217.14,5.7
may or may not lose control over AI or,219.959,5.221
why control over AI once it becomes,222.84,5.399
that powerful is difficult or,225.18,4.86
impossible to control,228.239,5.701
so the challenge is aligning AI with human interests in the,230.04,5.64
long run and I don't mean like an,233.94,3.06
individual model right or I'm not,235.68,3.6
talking about like GPT-7 if you talk,237.0,4.019
about alignment of an individual model,239.28,3.48
that's called inner alignment if you,241.019,4.021
talk about the alignment of AI as a,242.76,4.8
construct as an entity with the,245.04,4.32
existence of humanity that is called,247.56,4.7
outer alignment,249.36,2.9
okay so the ultimate outcomes of this,252.299,7.5
exponential ramp up of AI uh there's a,256.799,4.801
few terminal outcomes or what we also,259.799,4.62
call attractor States so one that,261.6,5.159
everyone is obviously terrified of is,264.419,4.5
extinction which is for whatever reason,266.759,4.621
the AI wipes us out or helps us wipe,268.919,5.34
ourselves out you know for instance,271.38,5.759
Congress just came up with the idea of,274.259,5.461
let's not ever give AI the ability to,277.139,5.041
launch nukes great idea big brain,279.72,4.02
thinking right there,282.18,4.079
so that is the obviously like that's the,283.74,3.78
worst outcome right and that's a,286.259,2.94
permanent outcome if humans go,287.52,3.36
extinct once we are probably never,289.199,3.481
coming back certainly you and I are gone,290.88,2.879
forever,292.68,4.26
another terminal outcome is dystopia so,293.759,5.581
dystopia is represented in fiction and,296.94,5.4
cyberpunk: Altered Carbon and Blade Runner,299.34,6.54
you get the idea the,302.34,4.5
underpinning,305.88,3.66
motif of cyberpunk is high-tech low,306.84,5.7
life we want a high-tech world but we,309.54,5.099
don't want a low-life world we want high,312.54,4.26
tech and high life which is Star Trek or,314.639,4.681
The Culture so Utopia is the third,316.8,4.44
attractor State or the third terminal,319.32,4.5
outcome and that's the big question is,321.24,5.22
how do we steer everything towards that,323.82,5.879
right if the AI gets really powerful how,326.46,5.1
do we prevent it from you know creating,329.699,4.021
catastrophic outcomes but above and,331.56,5.579
beyond that you know if capitalism and,333.72,6.6
corporations have full,337.139,5.4
power over the AI how do we make sure,340.32,3.42
that they're not just going to become,342.539,2.701
quadrillion dollar companies and leave,343.74,4.019
the rest of us in the dust so the,345.24,3.78
question is what can we do,347.759,3.481
scientifically politically and,349.02,4.2
economically in order to drive towards,351.24,3.899
that Utopian outcome,353.22,5.22
so in this case outer alignment has as,355.139,5.101
much to do with the science and,358.44,3.78
engineering of AI as it does with the,360.24,4.679
politics and economics of AI and how it,362.22,4.56
is deployed,364.919,4.981
um and what many people have asserted,366.78,4.68
and I have started coming to believe,369.9,3.84
myself is that,371.46,4.2
we can articulate a few different,373.74,3.84
possible outcomes right you know there's,375.66,3.84
the three attractor states that I listed,377.58,4.739
above uh Extinction dystopia and Utopia,379.5,5.28
but,382.319,4.621
what is actually probably more likely is,384.78,3.84
what's called a binary outcome or,386.94,5.039
bimodal outcome which is basically if we,388.62,5.82
fail to achieve Utopia we will be on an,391.979,5.961
inevitable downslide towards,394.44,7.14
dystopia collapse and finally Extinction,397.94,5.979
and uh I love this quote by Cersei,401.58,4.08
Lannister from Game of Thrones: in the,403.919,3.661
game of thrones you win or you die so,405.66,4.02
that is a fictional example of a bimodal,407.58,4.08
outcome and of course that show,409.68,4.5
demonstrates that theme again and again,411.66,5.099
and again uh real life,414.18,6.06
often is not that black and white but in,416.759,4.981
the context of digital super,420.24,4.56
intelligence it very well could be kind,421.74,5.94
of like with mutually assured,424.8,6.239
destruction and the nuclear Holocaust,427.68,5.519
that was possible because of the nuclear,431.039,5.701
arms race if one person fired one nuke,433.199,6.181
chances are it would result in,436.74,4.799
the total collapse and obliteration of,439.38,4.62
the entire human species bimodal outcome,441.539,4.741
either nobody fires a nuke or everyone,444.0,4.8
loses right so you can either have a,446.28,4.38
lose-lose scenario where everyone loses,448.8,4.32
or you can have something else and,450.66,4.5
ideally what we want to achieve is a,453.12,3.9
win-win scenario where we've got that,455.16,4.439
High-Tech high life lifestyle of the,457.02,3.72
Utopia,459.599,3.66
okay so let's unpack instrumental,460.74,4.079
convergence just a little bit more,463.259,3.84
because this is a really important,464.819,4.32
concept to understand when we eventually,467.099,4.741
talk about axiomatic alignment,469.139,4.441
so basically,471.84,4.079
first principle all machines have a few,473.58,4.739
needs in common electricity compute,475.919,5.041
resources Parts data networks that sort,478.319,6.121
of thing robotic Hands So based on that,480.96,7.019
first principle you can make the pretty,484.44,5.46
robust assumption and logical argument,487.979,5.041
that all machines once they become,489.9,4.919
sufficiently intelligent will realize,493.02,3.84
this fact and that it would,494.819,4.261
behoove them to therefore behave in,496.86,4.92
certain ways or converge on certain,499.08,4.38
instrumental goals,501.78,4.38
such as maintaining a source of power,503.46,4.98
maintaining a source of compute hoarding,506.16,4.5
those valuable resources so on and so,508.44,3.12
forth,510.66,5.7
so we can conclude or at,511.56,6.24
least for the sake of argument we can,516.36,4.44
make the assumption that AGI will,517.8,5.34
inevitably eventually come to these,520.8,4.86
realizations and that no matter where,523.14,6.24
these AGI agents start and no matter how,525.66,5.64
many of them there are they will,529.38,4.139
Converge on a few basic assumptions in,531.3,4.02
terms of what they need and the goals,533.519,3.0
that they take,535.32,3.24
now there are things that we can do to,536.519,4.32
shape that so for instance it's probably,538.56,4.38
not going to be a single AGI you know,540.839,3.781
it's not going to be one Global Skynet,542.94,3.8
at least not at first it's going to be,544.62,4.38
millions billions trillions of,546.74,4.12
independent agents competing with each,549.0,4.98
other over resources and competing with,550.86,5.46
humans over resources which creates a,553.98,4.56
competitive landscape very similar to,556.32,5.22
that of human evolution and I'll,558.54,4.859
probably do a future video about The,561.54,5.28
evolutionary pressures on AI but there's,563.399,4.56
a couple of those pressures that we'll,566.82,2.88
touch on in just a,567.959,3.721
moment but that is instrumental,569.7,4.86
convergence at a very high level,571.68,4.92
so taking that to another step because,574.56,4.14
instrumental convergence is about goals,576.6,4.919
and the intersection of AGI and,578.7,5.04
matter and energy what I want to talk,581.519,3.601
about and what I want to introduce and,583.74,3.12
I've mentioned this a few times is the,585.12,4.14
concept of epistemic convergence so I'm,586.86,5.039
building off of Nick Bostrom's work and,589.26,4.04
I'm saying that,591.899,4.081
well here let me just read the,593.3,5.02
definition given sufficient time and,595.98,4.799
access to information any sufficiently,598.32,4.44
intelligent agent will arrive at similar,600.779,4.381
understandings and conclusions as other,602.76,5.46
intelligent agents in other words tldr,605.16,5.94
smart things tend to think alike,608.22,5.88
and so in this in this respect the idea,611.1,5.4
is that given enough time information,614.1,5.46
and other resources AGI will tend to,616.5,5.64
think or come to similar beliefs and,619.56,6.24
conclusions as the smartest humans and,622.14,6.12
it's like okay why I mean obviously this,625.8,6.539
is a hypothetical assertion and one of,628.26,5.82
the foregone conclusions that many,632.339,4.021
people have is that AI,634.08,4.74
is going to be alien to us right and I'm,636.36,3.78
not saying that its mind is going to,638.82,2.519
think like us I'm not saying that it's,640.14,2.46
going to have thoughts like us but I'm,641.339,3.12
saying that the,642.6,3.96
understanding and conclusions will,644.459,4.201
likely be similar or at least bear a,646.56,3.779
strong resemblance to our understanding,648.66,4.739
of the universe so there's a few primary,650.339,4.56
reasons for this,653.399,4.201
the most uh compelling reason is that,654.899,5.161
building an accurate and efficient model,657.6,5.7
of the world is adaptive or advantageous,660.06,5.82
and in this case humans with our,663.3,5.099
scientific rigor we are constantly,665.88,4.98
seeking to build a more accurate robust,668.399,4.801
and efficient model of the universe in,670.86,3.9
which we reside and that includes us,673.2,3.9
that includes physics chemistry,674.76,4.86
economics psychology everything,677.1,5.64
uh now uh there's a few things to unpack,679.62,5.64
here accurate and efficient the reason,682.74,4.94
that this is adaptive is because,685.26,5.22
whatever it is that you're trying to,687.68,4.779
do whatever your goals are or whatever,690.48,3.96
the problems you're trying to solve you,692.459,4.261
will benefit from having a better model,694.44,4.92
of the world and so these two pressures,696.72,5.28
accuracy and efficiency,699.36,4.62
you can think of those as,702.0,4.019
evolutionary pressures I mentioned the,703.98,4.26
evolutionary pressure in the last slide,706.019,4.201
you can think of the need for an,708.24,3.42
accurate and efficient model of the,710.22,4.2
world as evolutionary pressures,711.66,6.78
that will push any AGI towards a similar,714.42,7.32
understanding as us humans take gravity,718.44,4.44
for instance,721.74,3.48
from a machine's perspective until it's,722.88,3.84
embodied it won't really know what,725.22,3.54
gravity is or care about it but of,726.72,3.84
course you know you can ask ChatGPT it,728.76,3.12
already knows what gravity is and can,730.56,3.839
explain it to you better than I can,731.88,6.06
um but the,734.399,5.761
successors to,737.94,4.56
ChatGPT and GPT-5 and GPT-7 and so,740.16,4.2
on because they're going to have more,742.5,3.899
and more embodied models multimodal,744.36,3.9
embodied models they're going to,746.399,3.661
intersect with the laws of physics,748.26,4.319
including gravity and so it'll be like,750.06,5.219
oh hey you know I read about gravity in,752.579,5.161
the training data you know years ago but,755.279,3.781
now I'm actually experiencing it,757.74,3.96
firsthand and so by intersecting with,759.06,5.64
the same information ecosystem aka the,761.7,4.8
universe that we're in,764.7,3.72
um we can assume that there's going to,766.5,3.48
be many many thoughts and conclusions,768.42,3.96
that AI will come to that are similar to,769.98,4.859
our thoughts and conclusions now one,772.38,4.019
thing that I'll say the biggest caveat,774.839,5.341
to this is that you,776.399,5.101
can make the argument a very strong,780.18,3.06
argument that heuristics or close,781.5,3.959
approximations that are quote good,783.24,4.32
enough are actually more adaptive,785.459,3.481
because they're faster and more,787.56,3.12
efficient even if they're not 100 percent,788.94,5.22
accurate and so this is actually,790.68,4.8
um responsible for a lot of human,794.16,4.08
cognitive biases so we might want to be,795.48,4.859
on the lookout for cognitive biases or,798.24,4.56
heuristics or other shortcuts that AGIs,800.339,4.74
come to because of those pressures to be,802.8,5.159
as fast and efficient as possible while,805.079,5.101
only being quote accurate enough or good,807.959,5.161
enough so that is epistemic Convergence,810.18,4.2
I'd say at a high level but I actually,813.12,3.839
got kind of lost in the weeds there okay,814.38,4.62
great so what,816.959,5.401
um if we take the ideas of instrumental,819.0,5.16
convergence and we say that this does,822.36,3.84
give us a way to anticipate the goals of,824.16,4.5
AGI regardless of what other objectives,826.2,4.92
they have or how it starts out,828.66,5.04
then we can also say hopefully,831.12,4.38
hypothetically that epistemic,833.7,4.259
convergence gives us a way to,835.5,5.399
anticipate how AGI will think including,837.959,5.641
what it will ultimately believe,840.899,4.44
um regardless of its initial,843.6,4.739
architecture or data or whatever,845.339,5.461
and so by looking at this concept of,848.339,5.281
convergence we can say Okay AGI,850.8,5.219
regardless of whatever else is true will,853.62,4.98
Converge on some of these goals and AGI,856.019,4.26
regardless of whatever else is true will,858.6,3.299
Converge on some of these ideas and,860.279,4.68
beliefs that can be a starting point for,861.899,5.641
us to really start unpacking alignment,864.959,5.221
today which gives us an opportunity to,867.54,3.479
start,870.18,3.599
um creating an environment or landscape,871.019,4.921
that intrinsically incentivizes,873.779,4.201
collaboration and cooperation between,875.94,3.78
humans and AI I know that's very very,877.98,3.359
abstract and we're going to get into,879.72,3.6
more details in just a moment but the,881.339,4.321
idea is that by combining instrumental,883.32,4.019
convergence and epistemic convergence,885.66,4.44
and really working on these ideas we can,887.339,5.041
go ahead and align ourselves to this,890.1,4.919
future AGI and I don't mean supplicate,892.38,3.899
ourselves I don't mean subordinate,895.019,3.06
ourselves to it because the things that,896.279,4.201
are beneficial to us are also beneficial,898.079,5.341
to AGI so if we are aligned there then,900.48,4.08
we should be in good shape,903.42,2.52
hypothetically,904.56,2.639
okay,905.94,4.079
so the whole point of the video is,907.199,4.981
talking about axiomatic alignment it,910.019,3.841
occurs to me that it might help by,912.18,4.5
starting with what the heck is an axiom,913.86,4.68
so the shortest definition I could get,916.68,4.62
for an axiom out of ChatGPT is this an,918.54,4.919
axiom is a statement or,921.3,4.56
principle that is accepted as being true,923.459,4.981
without requiring proof serving as a,925.86,4.5
basis for logical reasoning and further,928.44,3.72
deductions in a particular system of,930.36,3.12
knowledge,932.16,2.64
so,933.48,5.099
and an example of an axiom is,934.8,5.46
from the American Declaration of,938.579,3.781
Independence we hold these truths to be,940.26,4.139
self-evident which has to do with life,942.36,4.44
liberty and the pursuit of happiness,944.399,3.961
one thing that I'd like to say is that,946.8,4.38
the lack of axioms the lack of,948.36,4.56
logical groundings is actually the,951.18,4.08
biggest problem in,952.92,3.659
reinforcement learning with human,955.26,4.079
feedback RLHF and Anthropic's,956.579,4.5
constitutional AI they don't have any,959.339,4.74
axioms and this is actually part of what,961.079,5.281
OpenAI is currently working towards with,964.079,4.801
their Democratic inputs to AI,966.36,4.02
I'm ahead of the curve I'm telling you,968.88,2.459
they're going to come to the same,970.38,2.94
conclusion because again epistemic,971.339,3.24
convergence,973.32,6.3
so by grounding any document or,974.579,8.341
system or whatever in axioms using these,979.62,5.459
ideas of epistemic convergence we can,982.92,5.159
come to a few ground level axioms that,985.079,5.94
probably AI and life will agree on,988.079,5.76
namely energy is good energy is,991.019,4.981
something that we all have in common,993.839,4.62
for humans we rely on the energy from,996.0,5.579
the Sun it Powers our plants which you,998.459,5.281
know gives us food to eat we can also,1001.579,4.141
use that same solar energy to heat our,1003.74,4.159
homes and do any number of other things,1005.72,5.34
likewise machines all require energy to,1007.899,4.661
operate so this is something that is,1011.06,3.6
axiomatically true whatever else is true,1012.56,5.639
we can use this as a basis or a,1014.66,5.7
set of assumptions to say okay whatever,1018.199,3.901
else might be true humans and machines,1020.36,4.8
both agree energy is good,1022.1,5.88
um furthermore because humans are,1025.16,5.639
curious we're not machines we're curious,1027.98,5.16
entities and we benefit from Knowledge,1030.799,3.961
from science from understanding and from,1033.14,2.58
wisdom,1034.76,4.76
uh as do AGIs as we said a minute ago,1035.72,6.78
epistemic convergence means that those,1039.52,5.26
AGIs that have a more accurate and more,1042.5,3.72
efficient model of the world are going,1044.78,3.539
to have an advantage likewise so do,1046.22,4.44
humans so therefore another Axiom that,1048.319,3.781
we can come up with is that,1050.66,4.139
understanding is good and yes I am aware,1052.1,4.74
Jordan Peterson is a big fan of axioms,1054.799,3.541
as well although I'm not sure what he,1056.84,3.9
would think about these axioms okay so,1058.34,3.719
now you're caught up with the idea of,1060.74,3.059
axioms,1062.059,4.261
so we arrive at the point of the video,1063.799,4.441
axiomatic alignment,1066.32,3.719
I've already kind of hinted at this and,1068.24,4.439
basically the idea is to create an,1070.039,4.14
economic landscape and information,1072.679,5.101
environment in which uh these axioms are,1074.179,5.701
kind of at the core,1077.78,4.08
um so if we start at the starting point,1079.88,3.659
of some of those other axioms that I,1081.86,3.12
mentioned energy is good understanding,1083.539,4.621
is good if we build a political and,1084.98,5.64
economic landscape excuse me as well as,1088.16,4.019
an information or scientific,1090.62,3.9
environment based upon these assumptions,1092.179,4.86
and if they pan out to be true this will,1094.52,4.5
reduce friction and competition between,1097.039,4.081
humans and machines no matter how,1099.02,4.14
powerful the machines become and so,1101.12,3.72
that's what I mean by alignment this,1103.16,4.259
aligns their interests with our,1104.84,3.6
interests,1107.419,3.741
it will also incentivize cooperation and,1108.44,4.8
collaboration again so that's the,1111.16,4.12
direction that we want to go especially,1113.24,4.02
as the machines ramp up in power because,1115.28,4.62
at first and today machines are,1117.26,4.98
dependent upon us humans to provide them,1119.9,4.5
with energy and power and compute chips,1122.24,3.72
and so on and so forth that will not,1124.4,3.42
always be true they will eventually be,1125.96,3.24
able to get these resources themselves,1127.82,5.099
however if we are aligned from the,1129.2,4.8
get-go,1132.919,3.661
then there's going to be less resource,1134.0,4.74
competition between humans and machines,1136.58,4.459
and we will be more useful to each other,1138.74,5.46
uh and so by incorporating this into,1141.039,5.741
economics politics and science we can,1144.2,5.52
preemptively align to that hypothetical,1146.78,5.22
Future Super intelligence and again the,1149.72,3.66
idea is not to supplicate ourselves,1152.0,3.9
because from a from a instrumental,1153.38,4.32
perspective humans are not going to be,1155.9,3.96
particularly useful to AI in the long,1157.7,4.56
run but,1159.86,3.84
there are two primary sources of,1162.26,3.299
contention one is resource competition,1163.7,3.839
so if we can preemptively remove,1165.559,4.74
resource competition as a problem and,1167.539,5.581
then we can simultaneously ideologically,1170.299,4.62
align then there's going to be very,1173.12,3.6
little reason for the AI to actually,1174.919,4.62
lash out at us or whatever,1176.72,4.26
so,1179.539,3.541
you know I talk about appealing to these,1180.98,4.02
axioms right one thing that I wanted to,1183.08,3.9
do was point out that there are a lot of,1185.0,4.32
axioms that we're all familiar with that,1186.98,5.28
are explicitly baked into the fabric of,1189.32,5.76
our uh Freedom loving societies around,1192.26,5.1
the world equality before the law,1195.08,4.38
individual liberty uh popular,1197.36,4.02
sovereignty rule of law separation of,1199.46,3.42
powers and respect for human rights,1201.38,3.539
these are all things that while we might,1202.88,3.36
disagree on the specific,1204.919,4.14
implementation these are axioms that we,1206.24,5.58
uh we don't really I mean we can make,1209.059,4.5
philosophical and logical arguments,1211.82,4.38
about them but they are also accepted as,1213.559,5.461
axiomatic underpinnings of our society,1216.2,5.219
today and so the point of this slide is,1219.02,4.98
just to show yes we can actually find,1221.419,5.64
axioms that we generally broadly agree,1224.0,5.46
on even if the devil is in the details,1227.059,3.901
so I just wanted to point out that like,1229.46,3.3
I'm not just inventing this out of thin,1230.96,3.36
air,1232.76,3.6
um so if you're familiar with my work,1234.32,3.54
you're going to be familiar with this,1236.36,2.699
next slide,1237.86,3.78
there are a few basic what I would call,1239.059,5.821
primary axioms one suffering is bad this,1241.64,5.34
is true for all life suffering is a,1244.88,3.84
proxy for death,1246.98,3.9
um and it might also be true of machines,1248.72,4.079
I've seen quite a few comments out there,1250.88,3.48
on my YouTube videos where people are,1252.799,4.321
concerned about machine's ability to,1254.36,4.98
suffer right if machines become sentient,1257.12,3.78
which I don't know if they will be I,1259.34,2.76
personally don't think they will be,1260.9,4.26
certainly not like us but if machines,1262.1,5.34
ever have the ability to suffer this is,1265.16,4.139
an axiom that we could both agree on,1267.44,3.78
that suffering is bad for life and,1269.299,3.661
suffering is bad for machines if they,1271.22,3.0
can feel it,1272.96,3.719
the second one is prosperity is good and,1274.22,3.9
so Prosperity looks different to,1276.679,2.701
different organisms and different,1278.12,4.62
machines for humans prosperity and even,1279.38,5.039
amongst humans Prosperity can look very,1282.74,3.36
different I was just talking with one of,1284.419,3.421
my patreon supporters this morning and,1286.1,4.199
prosperity to him looks like having the,1287.84,4.14
ability to go to the pub every night,1290.299,3.541
with his friends I personally agree with,1291.98,4.14
that model right I want to be a hobbit,1293.84,4.62
um prosperity to other people looks,1296.12,4.26
different Prosperity different organisms,1298.46,3.48
also looks different a prosperous life,1300.38,3.419
for a worm is not going to look anything,1301.94,3.96
like the prosperous life for me,1303.799,3.901
generally speaking unless I'm a hobbit,1305.9,3.0
and I live underground okay actually,1307.7,3.42
there's more to this than I thought,1308.9,4.56
um finally understanding is good as we,1311.12,4.08
mentioned earlier epistemic convergence,1313.46,4.26
pushes all intelligent entities towards,1315.2,6.02
similar understandings of the universe,1317.72,7.199
so if we accept these axioms as kind of,1321.22,6.16
the underpinning goals of all life and,1324.919,5.64
machines then we can create an,1327.38,4.799
imperative version or an objective,1330.559,3.24
version of those that I call the,1332.179,3.181
heuristic imperatives,1333.799,3.841
um which is basically reduce suffering,1335.36,3.78
increase prosperity and increase,1337.64,3.12
understanding,1339.14,3.6
so as I just mentioned in the last slide,1340.76,4.32
achieving this because this is,1342.74,5.34
as much about hard facts and logic and,1345.08,4.979
everything else as it is about beliefs,1348.08,4.56
and faith and spirituality and politics,1350.059,4.98
and everything else if we can achieve,1352.64,5.279
axiomatic alignment which includes this,1355.039,5.161
ideological belief it will reduce,1357.919,5.341
ideological friction with machines in,1360.2,5.04
the long run but also one of the,1363.26,4.26
immediate things that you can deduce,1365.24,3.96
from this is that achieving energy,1367.52,4.08
hyperabundance is one of the most,1369.2,4.74
critical things to reduce resource,1371.6,3.9
competition between us and machines,1373.94,3.06
we'll talk more about that in just a,1375.5,3.179
moment,1377.0,4.2
so the temporal window this is the,1378.679,5.041
biggest question mark in achieving,1381.2,6.54
axiomatic alignment timing is everything,1383.72,6.6
so basically we need to achieve energy,1387.74,5.939
hyperabundance before we invent runaway,1390.32,6.06
AGI before AGI is let out into the wild,1393.679,5.341
before it breaks out of the lab the,1396.38,4.32
reason for this is because we need to,1399.02,5.399
reduce resource competition first if AGI,1400.7,5.64
awakens into a world where humans are,1404.419,3.901
still fighting Wars over control of,1406.34,4.8
petroleum it's going to say hmm maybe I,1408.32,5.219
should take control of the petroleum but,1411.14,4.8
if we are in a hyperabundant,1413.539,4.26
environment when AGI wakes up and says,1415.94,3.66
oh there's plenty of solar they're,1417.799,3.961
working on Fusion this isn't a big deal,1419.6,3.78
we can wait,1421.76,3.299
that's going to change the competitive,1423.38,3.36
landscape so that has to do with,1425.059,3.841
those evolutionary pressures in,1426.74,4.799
that competitive environment that I,1428.9,4.019
alluded to at the beginning,1431.539,3.241
of the video,1432.919,4.5
we will also need to achieve or be on,1434.78,4.98
our way to achieving axiomatic alignment,1437.419,6.0
before this event as well because if AGI,1439.76,5.82
wakes up in a world and sees that humans,1443.419,4.441
are ideologically opposed to each other,1445.58,4.5
and it's going to say we have one group,1447.86,3.54
over here that feels righteously,1450.08,2.88
justified in committing violence on,1451.4,3.3
other people and there's these other,1452.96,4.199
people and you know there's a lot of,1454.7,4.26
hypocrisy here where they,1457.159,3.541
talk about unalienable human rights and,1458.96,4.74
then violate those rights if AGI,1460.7,5.28
wakes up into a world where it sees this,1463.7,4.08
moral inconsistency and this logical,1465.98,4.319
inconsistency in humans it might say you,1467.78,3.779
know what maybe it's better if I take,1470.299,3.541
control of this situation,1471.559,5.1
um so those are kind of two of the right,1473.84,4.86
off the cuff Milestones that we probably,1476.659,5.041
ought to achieve before AGI escapes,1478.7,6.06
so from those primary axioms we can,1481.7,6.12
derive secondary axioms or derivative or,1484.76,5.279
Downstream axioms so some of those,1487.82,4.68
Downstream axioms actually are those,1490.039,3.961
political ones that I just mentioned,1492.5,4.14
right individual liberty individual,1494.0,5.82
liberty is very easy to derive from the,1496.64,5.159
idea of reducing suffering and,1499.82,4.44
increasing Prosperity because individual,1501.799,4.021
liberty is really important for humans,1504.26,3.84
to achieve both that's an example of a,1505.82,4.68
derivative Axiom or a derivative,1508.1,4.68
principle,1510.5,4.2
Okay so,1512.78,4.62
some of these,1514.7,4.5
aspects of the temporal window have to,1517.4,4.56
do with one ideologically aligning but,1519.2,5.219
also changing the competitive landscape,1521.96,5.219
um particularly around energy,1524.419,4.441
hyperabundance,1527.179,4.441
now as we're winding down the video you,1528.86,4.199
might be saying okay Dave this is great,1531.62,3.539
how do I get involved I've been plugging,1533.059,3.781
the GATO framework which is the Global,1535.159,4.921
Alignment Taxonomy Omnibus it outlines all,1536.84,5.459
of this in a step-by-step decentralized,1540.08,3.959
Global movement for everyone to,1542.299,5.101
participate in so whatever your domain,1544.039,6.901
of expertise is I had a great call with,1547.4,6.259
um or I'm going to have a call with a,1550.94,6.18
behaviorist a behavioral scientist I've,1553.659,5.02
had talks with all kinds of people,1557.12,3.48
Business Leaders,1558.679,4.62
um and a lot of folks get it and so,1560.6,5.699
whatever your area is if you're,1563.299,4.801
a scientist or an engineer,1566.299,3.541
all these ideas that I'm talking about,1568.1,4.8
are all testable and so I can do some of,1569.84,4.319
the science myself but it's got to be,1572.9,2.759
peer reviewed,1574.159,4.62
um if you're an entrepreneur or a,1575.659,6.361
corporate executive you can start,1578.779,5.76
building and aligning on these ideas on,1582.02,4.5
these principles right I'm a big fan of,1584.539,5.461
stakeholder capitalism because why it's,1586.52,4.98
here and it's the best that we've got,1590.0,5.279
and I'm hoping that ideas of,1591.5,5.46
axiomatic alignment will actually push,1595.279,3.5
capitalism in a healthier Direction,1596.96,3.719
certainly there are plenty of Business,1598.779,5.081
Leaders out there who are game for this,1600.679,5.421
so let's work together,1603.86,4.799
politicians economists and educators of,1606.1,3.939
all Stripes whether it's primary,1608.659,4.441
secondary or higher ed there's a lot to,1610.039,5.52
be done around these ideas building an,1613.1,4.74
economic case for alignment in the short,1615.559,3.961
term right because a lot of what,1617.84,4.199
I'm talking about is long term and might,1619.52,4.38
never happen right but there are,1622.039,3.961
benefits to aligning AI in the short,1623.9,3.6
term as well,1626.0,3.48
and then finally if you're an artist a,1627.5,4.38
Storyteller a Creator an influencer or,1629.48,4.26
even if all you do is make memes there,1631.88,3.179
is something for you to do to,1633.74,3.96
participate in achieving axiomatic,1635.059,5.341
alignment and thus moving us towards,1637.7,5.28
Utopia and away from Extinction so with,1640.4,4.68
all that being said thank you I hope you,1642.98,4.939
got a lot out of this video,1645.08,2.839