File size: 40,329 Bytes
421fea8
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
194
195
196
197
198
199
200
201
202
203
204
205
206
207
208
209
210
211
212
213
214
215
216
217
218
219
220
221
222
223
224
225
226
227
228
229
230
231
232
233
234
235
236
237
238
239
240
241
242
243
244
245
246
247
248
249
250
251
252
253
254
255
256
257
258
259
260
261
262
263
264
265
266
267
268
269
270
271
272
273
274
275
276
277
278
279
280
281
282
283
284
285
286
287
288
289
290
291
292
293
294
295
296
297
298
299
300
301
302
303
304
305
306
307
308
309
310
311
312
313
314
315
316
317
318
319
320
321
322
323
324
325
326
327
328
329
330
331
332
333
334
335
336
337
338
339
340
341
342
343
344
345
346
347
348
349
350
351
352
353
354
355
356
357
358
359
360
361
362
363
364
365
366
367
368
369
370
371
372
373
374
375
376
377
378
379
380
381
382
383
384
385
386
387
388
389
390
391
392
393
394
395
396
397
398
399
400
401
402
403
404
405
406
407
408
409
410
411
412
413
414
415
416
417
418
419
420
421
422
423
424
425
426
427
428
429
430
431
432
433
434
435
436
437
438
439
440
441
442
443
444
445
446
447
448
449
450
451
452
453
454
455
456
457
458
459
460
461
462
463
464
465
466
467
468
469
470
471
472
473
474
475
476
477
478
479
480
481
482
483
484
485
486
487
488
489
490
491
492
493
494
495
496
497
498
499
500
501
502
503
504
505
506
507
508
509
510
511
512
513
514
515
516
517
518
519
520
521
522
523
524
525
526
527
528
529
530
531
532
533
534
535
536
537
538
539
540
541
542
543
544
545
546
547
548
549
550
551
552
553
554
555
556
557
558
559
560
561
562
563
564
565
566
567
568
569
570
571
572
573
574
575
576
577
578
579
580
581
582
583
584
585
586
587
588
589
590
591
592
593
594
595
596
597
598
599
600
601
602
603
604
605
606
607
608
609
610
611
612
613
614
615
616
617
618
619
620
621
622
623
624
625
626
627
628
629
630
631
632
633
634
635
636
637
638
639
640
641
642
643
644
645
646
647
648
649
650
651
652
653
654
655
656
657
658
659
660
661
662
663
664
665
666
667
668
669
670
671
672
673
674
675
676
677
678
679
680
681
682
683
684
685
686
687
688
689
690
691
692
693
694
695
696
697
698
699
700
701
702
703
704
705
706
707
708
709
710
711
712
713
714
715
716
717
718
719
720
721
722
723
724
725
726
727
728
729
730
731
732
733
734
735
736
737
738
739
740
741
742
743
744
745
746
747
748
749
750
751
752
753
754
755
756
757
758
759
760
761
762
763
764
765
766
767
768
769
770
771
772
773
774
775
776
777
778
779
780
781
782
783
784
785
786
787
788
789
790
791
792
793
794
795
796
797
798
799
800
801
802
803
804
805
806
807
808
809
810
811
812
813
814
815
816
817
818
819
820
821
822
823
824
825
826
827
828
829
830
831
832
833
834
835
836
837
838
839
840
841
text,start,duration
hey everyone David Shapiro here with,1.319,5.161
today's video good morning,4.2,4.439
today's topic is going to be a little,6.48,4.199
bit more severe and a little bit more,8.639,4.981
intense so the title of today's video is,10.679,6.721
Agi Unleashed what happens when we lose,13.62,5.999
control,17.4,5.58
part one the coming storm,19.619,5.521
as everyone is aware things are ramping,22.98,4.379
up very quickly there are people calling,25.14,5.639
for moratoriums on AI research and while,27.359,5.641
some of us don't take it seriously there,30.779,4.321
are very legitimate concerns about,33.0,3.6
what's going on,35.1,4.139
and in other words we're in the end game,36.6,5.22
now we are in the ramp up towards AGI,39.239,5.521
and the singularity whether or not you,41.82,5.579
are emotionally ready for it,44.76,6.299
so just as a tiny bit of evidence this,47.399,5.961
paper from nature,51.059,4.441
demonstrates that we are in an,53.36,4.96
exponential ramp up on AI whatever else,55.5,5.34
is true the investment is there the,58.32,4.919
research is there it's happening and,60.84,5.22
it's not slowing down,63.239,5.401
now I'm probably preaching to the choir,66.06,4.44
and rehashing old,68.64,4.56
stuff that everyone knows but let's just,70.5,4.799
briefly talk about the existential risks,73.2,5.459
there are two overarching themes or,75.299,5.46
categories of the existential risks,78.659,7.441
risks for AGI one is the basically just,80.759,8.301
the deliberate weaponization of AI,86.1,6.839
some people are comparing AGI to nuclear,89.06,6.4
weapons beyond that there's the,92.939,5.581
potential of cyber warfare drone Warfare,95.46,8.24
autonomous uh tanks artillery aircraft,98.52,8.279
submarines so on and so forth many of,103.7,4.959
these systems are already being,106.799,3.86
developed and deployed,108.659,5.701
so basically ai ai is already being,110.659,5.621
weaponized it's just a matter of on what,114.36,2.88
scale,116.28,3.9
moreover the second category is,117.24,5.22
accidental outcomes or accident or,120.18,4.979
unintended consequences basically what,122.46,5.7
if the AGI escapes but there's a few,125.159,5.341
other possible avenues for this to,128.16,4.68
happen for instance runaway corporate,130.5,3.36
greed,132.84,2.94
you bet your bottom dollar that the,133.86,4.2
first corporation that can create AGI to,135.78,4.86
automate profits is going to do so,138.06,5.759
uh more uh beyond that there's political,140.64,4.8
corruption and just straight up,143.819,3.841
political incompetence this is something,145.44,3.84
that has been uh discussed actually,147.66,4.2
quite a bit in uh my comments section,149.28,4.76
for my most recent videos which is that,151.86,8.04
uh yes uh like Italy uh uh uh Britain,154.04,8.26
and a few other places are doing their,159.9,4.32
best to kind of get ahead of the curve,162.3,4.62
but the fact of the matter is is that,164.22,4.799
governments by and large are moving too,166.92,4.74
slow and politicians just don't get it,169.019,5.281
and then finally you can even have,171.66,6.0
situations where otherwise benign agis,174.3,6.18
collaborate and eventually turn on us,177.66,5.52
so the existential risks are there,180.48,5.46
and you guys most most of you know me I,183.18,4.62
am very optimistic I'm very sanguine,185.94,3.84
about all this and by the end of the,187.8,4.439
video I hope that many of you will see,189.78,4.739
where I'm coming from,192.239,5.881
now even if AGI doesn't wipe us out,194.519,7.14
there are still lots and lots of risks,198.12,5.759
um job loss economic changes social,201.659,5.16
upheaval so on and so forth mostly it,203.879,6.601
comes down to economic shifts who gets,206.819,5.761
to capture all of the tremendous wealth,210.48,5.58
generated by AGI and even if AGI is safe,212.58,5.519
and doesn't kill us there's still every,216.06,3.78
possibility that the corporatists and,218.099,3.841
capitalists will have us living in a,219.84,4.56
dystopian hell before too long,221.94,5.46
so that is also an intrinsic risk even,224.4,5.759
if it is not an existential risk so we,227.4,4.32
really need to be talking about what,230.159,3.72
happens when we get these super powerful,231.72,4.76
agis,233.879,2.601
part two autonomous AI,236.84,7.06
AGI is kind of a loaded term there's a,240.12,5.1
lot of,243.9,4.38
debate over what AGI even means so this,245.22,4.92
is why I have started saying autonomous,248.28,4.319
AI I don't care how smart it is,250.14,5.459
the the point being is that AI becoming,252.599,5.04
autonomous is really what we're talking,255.599,5.941
about and as we talk more about what is,257.639,6.601
the path to AGI autonomy is rapidly,261.54,6.18
becoming part of the conversation now,264.24,5.82
that being said there is a huge,267.72,4.68
disconnect between what,270.06,5.4
us proletariat are talking about and,272.4,6.54
what is being released from the halls of,275.46,5.64
power and what the academic,278.94,4.14
establishment is talking about,281.1,4.68
there is a lot of gatekeeping and a lot,283.08,5.52
of this is cloistered and that I think,285.78,4.74
is probably one of the biggest problems,288.6,3.9
which is honestly why I'm making these,290.52,4.22
videos,292.5,2.24
autonomous AI is coming whether or not,294.96,5.22
you like it whether or not you're,297.96,4.5
emotionally ready for it autonomous AI,300.18,4.56
is easier to make than you think people,302.46,3.959
are starting to realize this especially,304.74,4.38
with the release of chat GPT plugins and,306.419,4.381
the ability for language models to use,309.12,3.859
apis,310.8,5.88
to be fair the Department of Defense and,312.979,5.741
universities and tech companies are all,316.68,5.04
working on autonomous AI systems they,318.72,4.94
are kind of cloistering and and and,321.72,4.319
closing their research it's not as open,323.66,6.16
as it once was and in part I I totally,326.039,6.181
understand that because I think that,329.82,4.56
some of these folks realize how close we,332.22,3.0
are,334.38,3.659
and that scares them and so they're,335.22,4.62
they're basically playing the CL playing,338.039,4.741
the cards very close to their vest until,339.84,5.699
they get a better read on the situation,342.78,5.46
now taken one step further,345.539,5.641
or or looking at at this problem no one,348.24,5.04
has actually proposed a comprehensive,351.18,4.44
framework uh there's been plenty of,353.28,4.28
books on it there's been a few papers,355.62,5.82
but there's not really anything that is,357.56,7.18
a fully realized solution and I don't in,361.44,5.28
my opinion nobody has even fully,364.74,4.98
articulated the danger yet what's going,366.72,5.759
on and I'm hoping that this video will,369.72,4.44
advance that conversation just a little,372.479,3.481
bit,374.16,3.78
part of what happens in this comment,375.96,3.9
comes up on my videos a lot is a lot of,377.94,3.479
people are afraid to even talk about,379.86,4.02
this out of fear of ridicule,381.419,4.021
those of us that are paying attention,383.88,3.06
those of us that are doing the work,385.44,3.84
those of us that are working on it we,386.94,4.979
all see what's possible but Society by,389.28,4.56
and large is not ready for it they're,391.919,4.381
not ready for the conversation and so,393.84,4.32
there's a lot of ridicule whether it's,396.3,4.26
in Private Industry whether it's in the,398.16,4.68
halls of Academia government military,400.56,3.68
and so on,402.84,5.34
uh so this basically comes down to,404.24,5.32
something that I call institutional,408.18,3.359
codependence which is that the,409.56,3.24
establishment believes that the,411.539,3.061
establishment is always right and the,412.8,4.519
establishment will use any tactic attack,414.6,6.06
tactic or technique sorry uh to control,417.319,6.16
the conversation such as shame bullying,420.66,7.14
and otherwise silencing techniques uh to,423.479,6.72
stymie the conversation again this is,427.8,5.78
why I've created my YouTube channel,430.199,3.381
instead who gets to dominate the,433.88,5.219
conversation fiction and billionaires,436.68,5.639
Skynet Elon Musk The Establishment has,439.099,5.38
abdicated responsibility for this now,442.319,4.741
that being said I am absolutely 100,444.479,6.72
certain that the halls of power and and,447.06,6.24
the establishment is doing the research,451.199,4.801
behind closed doors,453.3,5.58
but they have abdicated responsibility,456.0,5.099
for controlling the narrative and,458.88,5.36
guiding the conversation,461.099,3.141
so let's talk about a couple aspects of,464.639,4.68
the control problem so for those not in,467.039,4.56
the know the control problem is the,469.319,5.28
category of problems of basically once,471.599,5.04
the AI becomes super intelligent how do,474.599,3.301
you control it,476.639,3.481
there's a few Concepts in here I'm not,477.9,3.72
going to go into every single one of,480.12,3.72
them but a few of them that you may or,481.62,4.62
may not know about one is convergent,483.84,4.68
instrumental values the idea is that any,486.24,4.92
sufficiently intelligent agent will come,488.52,4.679
up with certain instrumental values or,491.16,5.039
goals in service to those other goals,493.199,4.081
whatever the whether whatever whatever,496.199,3.541
its primary purpose happens to be such,497.28,4.02
as self-preservation resource,499.74,3.899
acquisition and so on,501.3,4.679
so basically in order to further any,503.639,5.28
goal whatever it happens to be your AI,505.979,5.101
might come to the conclusion that it,508.919,3.901
needs to preserve itself in order to,511.08,3.959
continue furthering its goal that's a,512.82,4.68
pretty reasonable uh thought to have I'm,515.039,4.92
not going to comment one way or another,517.5,4.38
um on on whether or not I think that's,519.959,3.781
going to happen ultimately I don't think,521.88,3.6
it's relevant and you'll see why a,523.74,3.539
little bit later second is the,525.48,3.84
orthogonality thesis which basically it,527.279,3.781
says very simply there is no correlation,529.32,4.199
between an ai's level of intelligence,531.06,4.8
and the values that it pursues,533.519,4.561
the treacherous turn this is a,535.86,4.8
hypothetic hypothetical situation in,538.08,7.199
which a apparently benign AGI suddenly,540.66,7.44
or apparently turns on its creators,545.279,5.101
because it came to some conclusion that,548.1,4.32
we don't understand,550.38,4.139
the courage ability problem which,552.42,5.039
basically says that uh your AI might,554.519,5.82
might not remain open to correction or,557.459,4.981
feedback or it might resist being shut,560.339,2.961
down,562.44,3.6
so basically you lose control of it just,563.3,5.92
because uh it says I'm not I'm not down,566.04,4.5
with that anymore,569.22,3.54
and then finally the value loading,570.54,4.32
problem which is how do you specify,572.76,4.68
human wave values in such a way that the,574.86,5.22
AI can understand and act on them and,577.44,4.74
then the very next follow-up question is,580.08,4.68
who gets to Define them anyways,582.18,4.14
so again these are a few of the,584.76,4.079
assertions and these and hypotheses this,586.32,4.56
is not an exhaustive list but you kind,588.839,4.68
of get an idea there are a lot of ideas,590.88,4.5
out there and not a whole lot of,593.519,3.241
solutions,595.38,4.56
now speaking of there are some solutions,596.76,5.4
out there and these are more like broad,599.94,3.72
categories,602.16,3.0
um rather than,603.66,3.84
um rather than comprehensive Frameworks,605.16,6.119
so one thing is kill switch Solutions,607.5,5.76
um pretty self-explanatory this is a,611.279,4.141
broad category of ideas,613.26,4.139
um I just saw on the internet someone,615.42,5.94
proposed that we we put uh bombs like,617.399,5.821
remotely triggered bombs in every data,621.36,4.38
center so that we can immediately shut,623.22,5.1
down every data center if we need to,625.74,5.539
okay sure that,628.32,5.76
doesn't sound like a very reasonable uh,631.279,5.62
direction to go for me but hey worst,634.08,5.699
comes worst case scenario maybe we do,636.899,5.641
uh courage ability which basically is,639.779,4.381
just the idea of just make the machine,642.54,4.799
responsive to feedback uh but what if,644.16,4.98
the feedback mechanism one doesn't work,647.339,4.081
or two the machine shuts it down or,649.14,5.04
three the feedback mechanism,651.42,6.359
um doesn't have the intended uh efficacy,654.18,5.279
that we want it to have,657.779,3.421
uh and then there's various kinds of,659.459,3.961
reinforcement learning inverse,661.2,4.44
reinforcement learning and passive and,663.42,5.4
blah blah basically just include an,665.64,6.6
algorithm so that the machine will,668.82,5.759
automatically autonomously learn the,672.24,4.68
values that we want one this is,674.579,4.141
difficult to do in the first place and,676.92,3.479
two what if the machine rewrites itself,678.72,4.7
or accident or even accidentally,680.399,5.341
nullifies uh those reinforcement,683.42,4.06
learning signals,685.74,4.08
finally values alignment what if you,687.48,4.56
build it with the human friendly values,689.82,4.139
in the first place,692.04,2.64
um,693.959,2.761
still you run into the same problem one,694.68,4.14
how do you implement it two what if it,696.72,4.08
changes its mind later and three what if,698.82,4.32
the values that you gave it are poorly,700.8,4.56
defined or intrinsically flawed or,703.14,5.879
broken now that that being said there is,705.36,5.659
a paper that literally just came out,709.019,4.201
called the capacity for moral,711.019,3.88
self-correction in large language models,713.22,3.66
so I'm really glad to see that the,714.899,3.901
establishment is,716.88,4.32
at the very beginning of talking about,718.8,4.38
this uh soberly,721.2,4.379
so the link is here but you can just,723.18,3.779
search that,725.579,3.421
um I I believe this paper was published,726.959,3.901
at least in part by the folks over at,729.0,4.68
anthropic still these are not complete,730.86,5.099
Solutions and we are literally months,733.68,4.74
away from from fully autonomous AGI,735.959,5.281
systems the conversation is not going,738.42,4.56
fast enough,741.24,4.2
so if you haven't heard found it or seen,742.98,4.26
it just do a Google search for bad,745.44,3.959
alignment take Bingo,747.24,4.38
um these are these were circulated on,749.399,4.581
Twitter probably more than a year ago,751.62,5.279
these will address and and argue against,753.98,4.72
many of the things that people say out,756.899,3.421
there so I'm not gonna I'm not gonna,758.7,3.6
rehash it or read all of them to you but,760.32,4.199
I just wanted to show you that like some,762.3,3.719
people are treating it like a joke it's,764.519,3.12
not really a joke but this is a good way,766.019,3.241
of just showing like yeah the thing that,767.639,4.861
you thought has already been addressed,769.26,7.079
part three the AGI landscape,772.5,5.7
um the biggest takeaway here is that,776.339,3.841
there's not going to be one AGI there's,778.2,4.62
going to be a few features of how AGI is,780.18,5.219
implemented so let's talk about that,782.82,5.519
first and foremost intelligence is not,785.399,4.861
binary it's not like you're going to,788.339,3.901
flip the switch in one day it's AGI but,790.26,3.92
the day before it wasn't,792.24,5.52
basically intelligence uh and and the,794.18,5.92
sophistication of AGI systems will,797.76,4.68
evolve over time there are going to be,800.1,4.88
various constraints such as time energy,802.44,5.76
data and all of that basically means,804.98,5.74
that the the level of power of your AGI,808.2,4.74
system is going to be on a sliding scale,810.72,5.04
so for instance even the most evil,812.94,5.28
machines might not be that powerful and,815.76,4.019
they're going to be constrained based on,818.22,3.179
you know the processing power of the,819.779,3.24
computers that they're running on the,821.399,3.841
network speed that they have so on and,823.019,4.5
so forth when you look at intelligence,825.24,3.779
there are literally thousands of,827.519,4.861
dimensions of intelligence that that the,829.019,5.161
types of intelligence out there are huge,832.38,4.32
and so AGI is not going to master all of,834.18,5.099
them immediately it's going to take time,836.7,4.62
and then as I just mentioned there are,839.279,4.201
gonna there are going to be numerous,841.32,4.38
limiting factors or constraints such as,843.48,3.84
the underlying Hardware the training,845.7,4.98
data and energy requirements of course,847.32,6.8
that is going to change quickly as,850.68,5.88
basically the underlying Hardware ramps,854.12,4.48
up exponentially the amount of data that,856.56,4.92
is available ramps up exponentially and,858.6,6.72
then the underlying machine learning,861.48,5.94
models the neural networks also get,865.32,3.959
exponentially more sophisticated and,867.42,3.539
larger,869.279,3.961
now as I mentioned most importantly,870.959,5.041
there won't just be one Skynet the,873.24,3.96
reason that we think that there's going,876.0,2.88
to be just one is because it is,877.2,3.379
convenient from a narrative perspective,878.88,4.44
in Terminator it's easy to just say,880.579,5.981
there's one big bad there's one Skynet,883.32,5.16
um but that's not how it's going to,886.56,4.38
happen there's going to be hundreds,888.48,4.44
thousands millions it's going to ramp up,890.94,3.24
very quickly,892.92,5.52
so what this results in is a sort of,894.18,6.899
arms race amongst in between the agis,898.44,4.98
themselves as well as the sponsors or,901.079,3.721
the people trying to build and control,903.42,4.02
them which results in a survival of the,904.8,5.399
fittest situation or a race condition,907.44,5.459
where basically the most aggressive and,910.199,5.221
sophisticated and and Powerful agis are,912.899,4.161
the ones who win,915.42,4.32
which that could be bad because then,917.06,4.899
you're basically selecting for the most,919.74,5.279
aggressive and hostile agis,921.959,5.82
the high velocity of AGI cyber warfare,925.019,6.18
will probably require our AGI systems to,927.779,6.481
be partially or fully autonomous,931.199,5.461
so basically what that means is that in,934.26,5.4
order to match the arms race in cyber,936.66,4.26
warfare,939.66,3.6
the agis that we built will probably,940.92,5.52
need to be evolving which means that,943.26,4.68
they'll spawn off copies of themselves,946.44,4.44
they'll be polymorphic they will recode,947.94,6.24
themselves so on and so forth and also,950.88,5.94
when you look at AGI in the context of,954.18,4.92
cyber warfare they will explicitly,956.82,4.759
require adversarial objective functions,959.1,4.979
this is what was explored in Skynet,961.579,4.361
which basically the objective function,964.079,4.141
of Skynet was probably like maximize,965.94,5.22
military power or so on,968.22,5.46
So In This Global AGR arms race there's,971.16,4.32
going to be numerous copies they're all,973.68,3.48
going to be changing which results in,975.48,3.479
the Byzantine generals problem so the,977.16,3.96
Byzantine generals problem is a cyber,978.959,4.261
security thought experiment where,981.12,5.159
wherein the idea is you have numerous,983.22,4.979
generals and you don't know their,986.279,3.36
allegiance you don't know their loyalty,988.199,3.601
and you don't know their plans either so,989.639,3.721
how do those how do those generals,991.8,3.96
communicate with each other in such a,993.36,4.919
way that they can understand who's on,995.76,4.499
whose side and also come to consensus on,998.279,4.261
what the plan is assuming that there are,1000.259,5.101
hostile or adversarial actors,1002.54,5.76
now thinking of this in terms of three,1005.36,5.46
to five entities is difficult enough but,1008.3,3.779
we're going to be talking about a,1010.82,3.36
situation where there are millions or,1012.079,5.281
billions of agis all of them with,1014.18,5.579
unknown objective functions,1017.36,5.399
as autonomous agents lastly they will,1019.759,4.92
form alliances with each other,1022.759,4.32
by some means or other they will,1024.679,4.081
communicate they will establish their,1027.079,4.201
intentions and allegiances,1028.76,4.679
um and they will spend more time talking,1031.28,4.08
with each other than they willed with us,1033.439,4.561
this was something that is um that,1035.36,4.38
people are starting to talk about some,1038.0,4.079
of the folks that I I'm working with on,1039.74,4.62
cognitive architecture we're realizing,1042.079,5.1
that the very instant that you create a,1044.36,5.64
cognitive architecture it can talk 24 7.,1047.179,5.581
we can't talk 24 7 so even just by,1050.0,4.62
virtue of experimenting with cognitive,1052.76,3.6
architectures it makes sense to have,1054.62,4.16
them talking with each other,1056.36,6.179
uh and having agis talk with each other,1058.78,5.139
and come to agreements and,1062.539,3.0
understandings,1063.919,3.841
um this is going to happen even with the,1065.539,4.621
most benign benevolent outcomes of AGI,1067.76,5.64
now what these what these autonomous AI,1070.16,5.879
systems agree and disagree on will,1073.4,5.279
likely determine the overall outcome of,1076.039,4.981
what happens with the singularity with,1078.679,4.62
AGI and with,1081.02,5.64
um the basically the fate of humanity,1083.299,7.141
part four AGI Unleashed now given,1086.66,5.22
everything that I've outlined the,1090.44,3.06
question remains how do you control the,1091.88,2.46
machine,1093.5,5.36
my answer is maybe you don't,1094.34,4.52
the reason that I believe this is,1099.2,2.88
because the genie is out of the bottle,1100.82,4.08
open source models are proliferating you,1102.08,4.92
can already run a 30 billion parameter,1104.9,4.44
model on a laptop with six gigabytes of,1107.0,4.44
memory that paper just came out what,1109.34,3.78
yesterday or today,1111.44,3.84
Global deployments of AI are rising,1113.12,4.62
Federal and Military investment globally,1115.28,4.139
is also Rising,1117.74,3.9
because of this centralized alignment,1119.419,4.681
research is completely irrelevant it,1121.64,5.159
doesn't matter how responsible the most,1124.1,5.04
responsible actors are there are hostile,1126.799,3.721
actors out there with malevolent,1129.14,3.539
intentions and they have lots of funding,1130.52,4.26
not only that the AI systems are,1132.679,5.101
becoming much more accessible,1134.78,5.7
because of that distributed cooperation,1137.78,5.399
is now required alignment is not just,1140.48,4.86
about creating an individual Ai and if,1143.179,4.081
you go look at the alignment the bad,1145.34,4.62
alignment take Bingo none of those talk,1147.26,5.34
about distribution collaboration or,1149.96,4.8
collective intelligence or Collective,1152.6,4.62
processing all of the all of the,1154.76,4.26
conversations today are still talking,1157.22,4.56
about individual agis as if they're,1159.02,5.1
going to exist in a vacuum so far as I,1161.78,5.1
know no one is talking about this in the,1164.12,6.059
context of Game Theory and competition,1166.88,5.64
so because of this we need an alignment,1170.179,4.021
scheme that can create open source,1172.52,3.06
collaboration amongst numerous,1174.2,4.5
autonomous AGI entities such a framework,1175.58,5.459
needs to be simple robust and easy to,1178.7,3.719
implement,1181.039,4.821
we'll get to that in just a minute,1182.419,3.441
so,1186.02,3.72
what I'm basically proposing is a,1187.82,3.9
collective control scheme which might,1189.74,3.54
sound impossible,1191.72,3.9
creating one benevolent stable super,1193.28,4.2
intelligence is hard enough and now I'm,1195.62,3.24
saying we need to create millions of,1197.48,3.96
them billions of them,1198.86,4.8
what I'm saying is not that we need to,1201.44,3.84
we might not have a choice in the matter,1203.66,4.68
this might be the only path forward,1205.28,4.92
now if you're familiar with the work of,1208.34,4.92
John Nash and Game Theory you might be,1210.2,5.099
able to think about this in terms of,1213.26,4.14
okay let's just imagine for a minute,1215.299,3.901
that there are millions of agis out,1217.4,2.94
there,1219.2,3.08
with many of them with unknown,1220.34,7.64
intentions given a game theory,1222.28,8.019
dilemmas like the prisoner's dilemma and,1227.98,4.72
so on if you think about this in that,1230.299,4.981
perspective it may be possible to devise,1232.7,4.979
rules or assumptions that enable the AI,1235.28,4.98
the agis to reach consensus on their,1237.679,5.641
behavior even with the presence of,1240.26,4.98
malicious and faulty actors,1243.32,5.28
so what kinds of rules or assumptions,1245.24,6.0
could we give our AGI systems that we're,1248.6,5.22
all going to be developing independently,1251.24,6.059
excuse me so that they arrive at this,1253.82,6.18
equilibrium this Nash equilibrium that,1257.299,4.861
we're looking for how do we ensure that,1260.0,4.02
this that the millions and billions of,1262.16,4.2
agis that are coming arrive at the,1264.02,5.399
consensus we want them to,1266.36,4.319
part five,1269.419,3.481
heuristic imperatives,1270.679,4.38
so now we're going to talk about the,1272.9,4.38
work that I have done on this problem,1275.059,5.221
and this is not just hypothetical these,1277.28,4.8
there are also experiments that I've,1280.28,3.66
done that are documented and I'll link,1282.08,4.2
to those as well,1283.94,4.08
so the heuristic imperatives that I have,1286.28,4.139
come up with are quite simply one reduce,1288.02,4.62
suffering in the universe two increase,1290.419,4.14
prosperity in the universe and three,1292.64,4.019
increase understanding in the universe,1294.559,3.48
and I've been I've been talking about,1296.659,4.861
these uh much more frequently lately,1298.039,5.461
so let's take a deeper dive into these,1301.52,3.18
imperatives,1303.5,3.48
so first what is a heuristic imperative,1304.7,4.26
it's a set of principles that can be,1306.98,4.319
embedded into autonomous AI that,1308.96,4.079
basically takes the place of intrinsic,1311.299,4.021
motivations now what I want to point out,1313.039,4.921
is that the gpt4 paper that Microsoft,1315.32,5.16
published did mention intrinsic,1317.96,4.86
motivation so again The Establishment is,1320.48,3.66
starting to come around and I'm sure,1322.82,2.58
they've had more conversations,1324.14,3.6
internally that they are not revealing,1325.4,4.86
yet but they are setting the stage to,1327.74,4.919
talk about what intrinsic motivations do,1330.26,3.539
we give them,1332.659,2.88
so in the case of the heuristic,1333.799,4.62
imperatives these are imperatives that,1335.539,6.061
are uh basically provided a moral and,1338.419,4.861
ethical framework as well as those,1341.6,3.78
intrinsic motivations because very early,1343.28,3.899
on in my research I realized that there,1345.38,4.679
is no difference between an intrinsic,1347.179,4.681
motivation and a moral and ethical,1350.059,4.081
framework you have to have some impetus,1351.86,4.799
some motivation behind and reasoning,1354.14,6.36
behind all behavior and all reasoning,1356.659,5.941
so why these three why suffering and,1360.5,4.98
prosperity and understanding first it's,1362.6,5.76
a holistic approach it uh it's a it's a,1365.48,5.28
flexible framework that provides a very,1368.36,5.34
Broad and yet simple to implement,1370.76,5.58
framework it also balances trade-offs,1373.7,4.5
remember these heuristic imperatives,1376.34,4.38
have to be implemented simultaneously,1378.2,6.479
and in lockstep so this forces the AI to,1380.72,5.819
reason through and balance trade-offs,1384.679,3.36
between,1386.539,3.841
um between these objectives,1388.039,4.681
they're also very adaptable and context,1390.38,5.039
sensitive and basically what I mean by,1392.72,4.68
that is that large language models today,1395.419,5.821
like gpt4 are very very aware of the,1397.4,6.18
fact that these that these general,1401.24,4.08
principles these heuristic imperatives,1403.58,4.8
are not the be-all end-all but they are,1405.32,5.82
guidelines they're they're uh they're,1408.38,4.14
shorthand,1411.14,3.539
um ways of basically implementing,1412.52,4.32
intuition in order to quickly make,1414.679,5.041
decisions uh that adhere to a general,1416.84,5.52
principle or a moral compass and then,1419.72,5.28
evaluate that uh based against the,1422.36,4.1
context that it's in,1425.0,4.14
there's two other things that emerged,1426.46,4.719
during my most recent experiments with,1429.14,4.14
the heuristic imperatives and that is,1431.179,3.961
that the heuristic imperatives promote,1433.28,5.04
individual autonomy uh basically chat,1435.14,6.72
gpt4 realized that in order to reduce,1438.32,5.64
suffering of people you need to protect,1441.86,5.04
individual autonomy ditto for Prosperity,1443.96,4.92
that if you control people they're not,1446.9,3.06
going to be happy and they're not going,1448.88,3.299
to be prosperous so that was an emergent,1449.96,4.44
quality of the heuristic imperatives,1452.179,4.74
that surprised me and made me realize,1454.4,7.08
that chat gpd4 is already capable of a,1456.919,7.981
very very highly nuanced reasoning the,1461.48,5.16
other emerging quality that I did,1464.9,4.62
anticipate was fostering Trust,1466.64,6.06
basically when you have an AI equipped,1469.52,5.1
with these heuristic imperatives it,1472.7,4.92
understands that um fermenting trust or,1474.62,5.46
fostering trust with people is actually,1477.62,4.86
critical as a subsidiary goal or an,1480.08,4.74
auxiliary goal of these because if if,1482.48,4.679
humans don't trust the AI the rest of,1484.82,5.7
its imperatives are made irrelevant,1487.159,6.301
finally there are a lot of what about,1490.52,4.92
isms yeah but what about there's a lot,1493.46,3.54
of protests which of course this is part,1495.44,3.0
of the conversation,1497.0,3.84
so the most con these are some of the,1498.44,5.04
most common protests that I get when I,1500.84,4.26
talk about the heuristic imperatives one,1503.48,4.079
is won't reduce suffering result in the,1505.1,4.319
extermination of all life the short,1507.559,4.221
answer is yes if you only have that one,1509.419,5.221
which is why I spent two years working,1511.78,5.56
on the other two heuristic imperatives,1514.64,5.1
to counterbalance them because I realize,1517.34,4.86
that any single objective function is,1519.74,3.84
always going to be intrinsically,1522.2,4.74
unstable you must have a system that,1523.58,6.42
balances multiple sometimes antagonistic,1526.94,5.58
functions against each other in order to,1530.0,5.4
stabilize and reach that equilibrium,1532.52,5.279
number two yeah but who gets to Define,1535.4,4.139
suffering prosperity and understanding,1537.799,4.26
the short answer is nobody that is the,1539.539,4.201
point of of implementing it as a,1542.059,4.561
heuristic the machine learns as it goes,1543.74,6.299
and anyways llms like gpt4 already have,1546.62,5.1
a far more nuanced understanding,1550.039,4.321
understanding of the concept of,1551.72,4.92
suffering prosperity and understanding,1554.36,5.04
um than any individual human does and,1556.64,5.22
also humans have never needed perfect,1559.4,4.32
definitions we learn as we go as well,1561.86,4.08
and we get by,1563.72,4.74
number three well what about uh cultural,1565.94,5.16
biases and individual differences as I,1568.46,5.579
just mentioned in the last slide gpd4,1571.1,4.62
already understands the importance of,1574.039,4.081
individual liberty and autonomy as well,1575.72,5.579
as how critical self-determination is to,1578.12,5.82
suffering or to reduce suffering and,1581.299,4.461
increase prosperity,1583.94,5.04
so because of that and also because it,1585.76,5.14
is aware of context the importance of,1588.98,3.26
context,1590.9,3.54
issue number three is actually less of,1592.24,4.72
an issue than you might think and,1594.44,5.4
finally number four uh and most,1596.96,4.98
importantly why would the machine hold,1599.84,4.079
to these imperatives in the first place,1601.94,4.5
and we will get into this in a lot more,1603.919,3.601
detail,1606.44,4.26
but the tldr is that with Game Theory,1607.52,5.159
and thinking of it in terms of the,1610.7,4.02
Byzantine generals problems,1612.679,4.38
all of the agis equipped with the,1614.72,3.66
heuristic imperatives would be,1617.059,3.48
incentivized to cooperate Not only would,1618.38,3.659
they be incentivized to cooperate with,1620.539,3.24
each other they'll be incentivized to,1622.039,3.321
cooperate with us,1623.779,5.78
and that results in a collective,1625.36,7.54
equilibrium in which the Hostile and,1629.559,5.321
malicious agis are going to be the,1632.9,5.279
pariahs so basically the benevolent,1634.88,5.52
machines are stronger together than the,1638.179,5.961
Hostile actors are individually,1640.4,3.74
okay great,1644.24,3.78
assuming that you're on board how do you,1645.799,3.541
implement this this sounds too,1648.02,3.18
complicated well fortunately it's,1649.34,3.66
actually not that complicated,1651.2,4.8
first is constitutional AI so I proposed,1653.0,4.74
a constitution in my book natural,1656.0,4.08
language cognitive architecture back in,1657.74,4.86
the summer of 2021 almost two years ago,1660.08,5.28
right after that anthropic AI came out,1662.6,4.38
and they did their own version of,1665.36,3.26
constitutional AI which was reduce,1666.98,5.16
harmfulness or achieve harmlessness,1668.62,5.62
I don't think anthropic's core objective,1672.14,3.899
function is good because the most,1674.24,4.439
harmless AGI is not going to be one that,1676.039,5.88
fights other malicious agis at least I,1678.679,4.98
don't think so,1681.919,4.021
um another way but still the premise of,1683.659,4.5
of implementing it in a Constitution,1685.94,3.42
which is just a natural language,1688.159,3.481
document saying how the AI should behave,1689.36,4.38
does seem to work,1691.64,4.32
reinforcement learning the heuristic,1693.74,4.02
imperatives can make a really good,1695.96,4.74
reinforcement learning signal similar to,1697.76,4.14
reinforcement learning with human,1700.7,3.599
feedback but instead use the heuristic,1701.9,4.58
imperatives as feedback so it'd be,1704.299,4.321
rlhi reinforcement learning with,1706.48,4.54
heuristic imperatives so it's just a,1708.62,4.62
different reward system this also tends,1711.02,3.779
to work pretty well I've tested it with,1713.24,5.039
fine tuning it works pretty well,1714.799,5.821
um number three planning cognitive,1718.279,3.9
control task management and,1720.62,3.36
prioritization these heuristic,1722.179,3.541
imperatives work really well with,1723.98,3.78
Frameworks such as atom which atom is a,1725.72,3.78
framework that I recently wrote about,1727.76,3.48
called autonomous task orchestration,1729.5,5.82
manager so basically as your AI system,1731.24,6.66
is coming up with and executing tasks,1735.32,4.68
you use the heuristic imperatives to,1737.9,4.32
plan the tasks to choose which tasks to,1740.0,4.559
do to prioritize them and also choose,1742.22,4.559
which tasks not to do,1744.559,4.441
and then finally for review assessment,1746.779,4.26
and self-evaluation online learning,1749.0,4.14
systems that use the heuristic,1751.039,4.461
imperatives are super easy to implement,1753.14,5.88
and and are very flexible and that can,1755.5,5.26
also allow you to label data for,1759.02,5.159
training and future decision making,1760.76,5.7
so if you're on board with all this and,1764.179,3.841
you want to read more,1766.46,3.62
um I've got it all for free on GitHub,1768.02,4.139
I've also got a few books that are on,1770.08,3.52
Barnes and Noble but most people just,1772.159,4.321
use the the free ones anyways so the,1773.6,5.22
most recent work is on my GitHub under,1776.48,5.34
Dave shop heuristic imperatives this is,1778.82,5.459
a white paper that was almost entirely,1781.82,4.32
written by gpt4 so you can see how,1784.279,3.841
nuanced gpt4's understanding of the,1786.14,3.72
problem is,1788.12,3.48
um about a year ago I published a book,1789.86,3.66
called benevolent by Design which is the,1791.6,3.84
first book that fully promotes uh,1793.52,4.74
proposes this framework and explores,1795.44,4.859
different ways to implement it and then,1798.26,4.44
finally also very recently I proposed,1800.299,4.201
the atom framework which includes the,1802.7,3.78
heuristic imperatives for task,1804.5,3.299
orchestration,1806.48,4.02
but also moreover I encourage you to,1807.799,4.26
just have a conversation with chatgpt,1810.5,3.84
about these uh plenty of people on,1812.059,3.961
Reddit and other and Discord and other,1814.34,3.78
places have tested the heuristic,1816.02,3.56
imperatives they've tried to break them,1818.12,4.98
and they and you know they use the one,1819.58,6.16
one interesting conversation was someone,1823.1,5.1
used chat GPT to try and come up with,1825.74,4.919
the the pitfalls of the heuristic,1828.2,4.62
imperatives and I said yeah like that,1830.659,3.841
just goes to show that it has a more,1832.82,3.66
nuanced understanding of the risks and,1834.5,3.72
the implementation than you do and,1836.48,3.299
they're like okay yeah I guess I see,1838.22,2.76
what you mean,1839.779,5.4
okay so part six conclusion,1840.98,6.059
as far as I can tell the problem is,1845.179,4.801
solved but there's still a lot of work,1847.039,5.161
to do,1849.98,4.74
so the problem comes down to twofold one,1852.2,5.099
is dissemination and experimentation the,1854.72,4.14
perfect solution doesn't matter if no,1857.299,3.541
one knows about it so we need to spread,1858.86,3.24
the word,1860.84,3.42
um this is why I created my YouTube,1862.1,3.66
channel,1864.26,4.26
um and even if my heuristic comparatives,1865.76,4.38
are not perfect it's the best we've got,1868.52,3.3
so far,1870.14,2.279
um,1871.82,2.88
yeah so I've been working pretty much a,1872.419,3.841
year straight to get my YouTube channel,1874.7,3.18
as big as possible,1876.26,6.36
to achieve to arrive at this moment,1877.88,6.36
another problem is that there's only so,1882.62,3.299
much experimentation I can do on my own,1884.24,3.659
now that being said lots of other people,1885.919,3.781
have started experimenting I'm working,1887.899,4.861
with various cognitive architects who,1889.7,4.56
have put the heuristic imperatives into,1892.76,4.44
their machines and again they have,1894.26,6.18
discovered that yes it is one very easy,1897.2,4.74
to implement the heuristic imperatives,1900.44,4.2
and two it does seem to drive curiosity,1901.94,5.28
and a few other uh beneficial behaviors,1904.64,4.259
for the machine it makes them very,1907.22,3.54
thoughtful,1908.899,3.061
um there's a few places that you can,1910.76,2.7
join the conversation,1911.96,2.64
um all the links are in the description,1913.46,3.42
so I just created a new subreddit called,1914.6,4.02
heuristic imperatives so that we can,1916.88,3.659
talk about these and share our work,1918.62,4.14
there's also a Discord Community,1920.539,4.26
um also Link in the description but I've,1922.76,4.26
been working on this since 2019 when,1924.799,4.321
gpt2 came out,1927.02,4.56
um and you know I will be the first to,1929.12,4.439
admit there's a lot of ways to skin this,1931.58,4.56
cat maybe my heuristic imperatives,1933.559,4.921
aren't even the best but at least now,1936.14,4.019
you're aware of the concept and you know,1938.48,4.5
how easy it is to implement so maybe the,1940.159,4.201
rest of us can collectively work,1942.98,3.66
together and implement this situation,1944.36,5.52
where even in an uncertain environment,1946.64,4.86
with potentially hostile actors the,1949.88,3.899
Byzantine generals environment we can,1951.5,4.26
have agis that will cooperate and,1953.779,4.26
collaborate and that will ultimately end,1955.76,6.0
up in a very safe and stable environment,1958.039,6.0
so all that being said thank you for,1961.76,4.38
watching please jump in the comments the,1964.039,4.441
conversation Discord and Reddit and do,1966.14,4.68
the experiments yourself I promise it's,1968.48,3.54
pretty easy,1970.82,5.0
all right that's it,1972.02,3.8