davidshapiro_youtube_transcripts / The Age of Autonomous AI Dozens of Papers and Projects plus my solution to the Alignment Problem_transcript.csv
text,start,duration
morning everybody David Shapiro here,1.14,4.02
with another video I wasn't really,3.48,3.419
planning on making this video but I,5.16,3.18
realized that things are accelerating,6.899,4.141
and um there is a sense of urgency,8.34,4.2
um so before we get started I just want,11.04,3.42
to say that today's video is sponsored,12.54,4.02
by all of you,14.46,5.399
um my patreon supporters make my,16.56,6.12
continuous work possible so if you want,19.859,5.16
to continue to incentivize this Behavior,22.68,5.04
Uh consider jumping over on patreon and,25.019,4.801
if you sign up for the higher tiers um,27.72,4.98
you know I'm willing to chat with you,29.82,6.66
and even jump on Zoom calls once or,32.7,6.24
twice a month at the higher tiers uh,36.48,4.32
just in order to talk about whatever you,38.94,3.9
want to talk about some people ask me,40.8,4.86
about you know what's the current news,42.84,4.62
um some people ask for help with prompt,45.66,4.079
engineering all kinds of stuff I've even,47.46,4.5
had people ask me just about like how do,49.739,5.421
I adapt to this changing landscape,51.96,6.48
obviously I'm not a therapist but I can,55.16,5.02
at least share my perspective on this,58.44,4.2
stuff okay so without further Ado let's,60.18,6.06
jump in if you go to GitHub trending,62.64,6.06
you'll see a couple of very interesting,66.24,5.16
patterns the top four trending,68.7,4.919
repositories right now all have to do,71.4,5.399
with large language models,73.619,4.801
um and then you go down a little bit,76.799,4.261
further and there's even more generative,78.42,4.98
AI so there's a code translator there's,81.06,7.08
Mochi Diffusion llama so obviously we are,83.4,7.62
in an inflection point and today we're,88.14,4.88
going to talk about amongst other things,91.02,5.7
fully autonomous AI so if you're not,93.02,6.76
aware Auto GPT is all the rage right now,96.72,5.28
everyone is talking about it everyone is,99.78,6.0
using it and adapting it and,102.0,8.28
the TLDR is this is the first,105.78,7.32
production like fully fledged cognitive,110.28,4.32
architecture there's plenty of other,113.1,4.5
people working on very similar stuff,114.6,7.62
um but the Advent of gpt4 uh as well as,117.6,6.9
all the other work that people are doing,122.22,3.6
um basically means that cognitive,124.5,3.84
architecture is here uh fully autonomous,125.82,5.58
AI is here now the question is only what,128.34,4.74
is it capable of what are its,131.4,3.96
limitations and how much does it cost to,133.08,3.299
run,135.36,2.28
um I'm not going to do a full,136.379,3.661
demo of this but if you just Google it or,137.64,4.86
you know search YouTube for auto GPT you,140.04,3.9
will see that there are demos out there,142.5,3.959
already this can do any number of things,143.94,5.519
so this is why there's a sense of,146.459,4.5
urgency because once you have an,149.459,3.42
autonomous AI,150.959,3.721
um this is this one is semi-autonomous,152.879,4.08
it is gated so that it asks the user for,154.68,4.919
permission but it's only a very small,156.959,6.201
step to go from here to fully autonomous,159.599,6.36
which is why I do my work with the,163.16,4.24
heuristic imperatives and we'll talk,165.959,3.121
about alignment once we get a little bit,167.4,4.08
further into the video because there's,169.08,4.379
quite a few papers out there that talk,171.48,4.02
about alignment and I want to show you,173.459,3.961
that that my work is not quite so,175.5,4.379
eccentric that there are people in The,177.42,4.02
Establishment talking in this direction,179.879,3.781
I just happen to be the first one to,181.44,4.56
propose a comprehensive solution that I,183.66,4.68
can also demonstrate,186.0,5.28
um so yeah Auto GPT is out it's only,188.34,4.8
going to get faster more powerful and,191.28,5.879
better as uh new models come out and as,193.14,6.06
open source models that are distilled,197.159,3.961
and quantized come out and we'll talk,199.2,3.619
about those in just a minute,201.12,5.399
Microsoft is doing Jarvis which Jarvis,202.819,4.78
if you're not familiar with the,206.519,3.541
character was voiced by Paul Bettany in,207.599,4.881
Iron Man and the MCU,210.06,6.0
and this has some other similar uh fully,212.48,4.899
autonomous capabilities that they're,216.06,3.66
working on task planning model selection,217.379,4.561
task execution and response generation,219.72,5.36
again this is a cognitive architecture,221.94,5.7
and the fact that it's being sponsored,225.08,5.98
and run by Microsoft tells you,227.64,6.179
the direction that the industry is going,231.06,5.34
um now one thing here is that model,233.819,5.221
selection so what this implies is that,236.4,4.38
depending on the level of sophistication,239.04,4.259
of a task or how difficult it is it's,240.78,3.72
going to be able to choose different,243.299,2.281
models,244.5,3.44
now during a Discord call that I had,245.58,5.04
with the cognitive AI lab Discord which,247.94,3.999
if you want to join link is in the,250.62,3.899
description we were talking about how,251.939,4.26
important it will be to choose models,254.519,3.96
because the lightest weight models are,256.199,4.981
literally thousands of times cheaper and,258.479,5.22
smaller than the largest models and so,261.18,5.22
humans we do this too where we rely on,263.699,5.581
intuition and habit and we only engage,266.4,4.859
our executive function if something is,269.28,4.74
really hard and our first attempts fail,271.259,7.321
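That intuition-first, escalate-on-failure pattern can be pictured as a model cascade. This is my own illustrative sketch, not code from AutoGPT or Jarvis: both "models" and the failure signal are hypothetical stand-ins.

```python
# Hypothetical model cascade: try a cheap local model first and only
# escalate to a large expensive model when the first attempt fails.
# Both "models" and the failure check are illustrative stand-ins.
def small_local_model(task):
    # Pretend the small model gives up on anything marked hard.
    return None if "hard" in task else f"quick answer: {task}"

def large_remote_model(task):
    return f"careful answer: {task}"

def route(task):
    answer = small_local_model(task)       # intuition and habit first
    if answer is None:                     # first attempt failed...
        answer = large_remote_model(task)  # ...engage executive function
    return answer

print(route("easy lookup"))   # quick answer: easy lookup
print(route("hard problem"))  # careful answer: hard problem
```

Since the small model is thousands of times cheaper, a cascade like this only pays the big-model cost on the minority of genuinely hard tasks.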
excuse me and so if um if cognitive,274.02,6.36
architectures go the same way you're,278.58,3.3
going to be able to run most of it,280.38,3.539
locally and then of course as large,281.88,3.84
language models become more quantized,283.919,4.381
more efficient and as the hardware in,285.72,5.4
our laptops phones and desktops become,288.3,4.98
more powerful eventually before too long,291.12,3.84
we're going to be able to run something,293.28,5.82
equal to gpt4 and better locally so we,294.96,6.9
are now entering as of March and,299.1,5.819
April 2023 the era of fully,301.86,6.899
autonomous AI which is a much more,304.919,7.201
useful term than AGI because AGI is just,308.759,5.521
an arbitrary thing this is autonomous,312.12,4.62
now the only question is again how smart,314.28,4.919
is it how fast is it what is it capable,316.74,4.92
of and what is it not capable of yet,319.199,4.56
so those are the two big repos that I,321.66,4.259
wanted to point out and they're both the,323.759,4.681
top of trending so that tells you that,325.919,3.72
they are getting the most attention,328.44,2.46
right now so if you want to jump into,329.639,4.441
the conversation Now's the Time,330.9,5.76
um okay so moving right along if you,334.08,4.26
want something that's a little bit more,336.66,4.259
practical and Hands-On one of my patreon,338.34,5.16
supporters told me about Jaseci which,340.919,5.161
Jaseci is basically devops but for AI,343.5,4.62
and language models,346.08,4.2
um so it gives you an end-to-end,348.12,3.78
pipeline,350.28,4.259
um to create basically cognitive,351.9,4.68
architectures it includes all kinds of,354.539,5.1
tools and apis and it does take a while,356.58,4.619
to get familiar with if you're not,359.639,4.141
already familiar with it but when you,361.199,3.72
look at the fact that it can,363.78,4.259
automatically generate apis,364.919,6.481
um and you you plug this into the AI and,368.039,5.821
the AI can design itself and redesign,371.4,3.9
its own infrastructure and say hey I,373.86,2.94
need an API that does this let's go,375.3,3.56
design that microservice,376.8,5.339
this kind of platform is probably going,378.86,6.459
to be pretty important for building not,382.139,5.581
just autonomous you know Bots like this,385.319,6.741
but fully fledged uh corporate business,387.72,7.44
platforms and so what I mean by that is,392.06,5.139
okay you might be thinking great like,395.16,4.92
you know you can have Auto GPT which can,397.199,6.241
write Twitter and emails for you but if,400.08,4.619
you're thinking about this from an,403.44,3.36
Enterprise perspective from a devops,404.699,5.94
perspective it can plug into your cyber,406.8,6.239
security Suites and monitor that it can,410.639,4.741
monitor your ticket queues it can talk to,413.039,4.38
your marketing team so one example that,415.38,4.5
I thought of was like okay let's say you,417.419,5.4
set up a marketing brain and then it,419.88,5.819
plugs into your slack or teams and then,422.819,4.801
you have a marketing bot or actually,425.699,3.481
multiple marketing Bots that you can,427.62,3.84
talk to that they're going to go out and,429.18,4.26
do research on the internet you know,431.46,3.98
look at your competitors watch videos,433.44,5.46
generate images Market test stuff and,435.44,5.56
basically your marketing team will just,438.9,5.4
be driving the behavior of the Bots and,441.0,5.22
saying like hey you know hey let's do,444.3,3.66
this and then it'll go and do the tasks,446.22,3.96
and kind of report back and that and,447.96,3.66
this might sound like science fiction,450.18,2.88
but this is actually what's happening,451.62,2.76
right now this is what people are,453.06,4.56
actually working on right now,454.38,4.92
um I am no longer the crazy person,457.62,3.12
shouting into the void saying this is,459.3,2.94
coming because now it has now it has,460.74,2.82
arrived,462.24,2.82
um okay and this one is actually,463.56,2.94
alignment so let me move that down,465.06,3.0
further,466.5,4.139
um but yeah so Jaseci it's jaseci.org,468.06,5.22
check that out this is this is another,470.639,6.721
platform so one thing that uh that these,473.28,6.479
these interoperable platforms offer that,477.36,3.779
perhaps,479.759,4.861
um the auto GPT and Jarvis don't is it's,481.139,6.541
a paradigm of okay let's let's think of,484.62,6.419
Jarvis and auto GPT as self-contained,487.68,6.66
agents that have you know extensibility,491.039,6.181
and have their own tools whereas a,494.34,4.919
platform like Jaseci says let's embed,497.22,4.5
this in an organization and it'll be,499.259,5.34
part of a pipeline or or a broader,501.72,5.52
ecosystem so it's basically is it,504.599,5.641
centralized or decentralized and both,507.24,6.539
are coming mark my words both kinds of,510.24,6.12
autonomous AI is coming I'm working on,513.779,4.62
uh one another one of my patreon,516.36,4.02
supporters reached out to me with an,518.399,4.38
idea of kind of a hive mind how do you,520.38,5.16
organize an arbitrary number of bots,522.779,5.941
that have different programs well you,525.54,4.5
create an API and you create a,528.72,3.36
discussion space for those bots,530.04,4.919
so we're working on hammering that out,532.08,5.52
um and yeah like this is this is It's,534.959,5.82
we're entering into a wild time,537.6,5.94
okay so I've talked about efficiency and,540.779,4.201
some of the other,543.54,3.0
um things that are coming such as,544.98,2.88
quantization,546.54,2.64
um and and we're going to start talking,547.86,2.82
about those now,549.18,6.18
so some of you have seen this post where,550.68,8.46
um basically window size is the,555.36,8.039
biggest limitation uh right now but what,559.14,5.4
if we come up with a different,563.399,3.721
architecture like an RNN or you know,564.54,5.1
LSTM or you know bring back some,567.12,5.64
other kinds of architectures that allow,569.64,5.04
you to have a section essentially an,572.76,5.519
unlimited window an infinite window,574.68,6.42
um so that's one thing that's coming we,578.279,4.321
don't know you don't need to see those,581.1,2.52
ads,582.6,2.82
um so that's that's one idea that's,583.62,5.52
coming uh we'll see if it pans out I,585.42,5.34
suspect that you're gonna get um,589.14,3.36
diminishing returns with the more that,590.76,4.5
it reads because other uh models like,592.5,4.92
Google's Universal sentence encoder that,595.26,3.9
can read an infinite amount already but,597.42,3.9
you get what's called semantic,599.16,5.52
dilution where the longer,601.32,5.639
excuse me uh I have allergies I,604.68,4.32
apologize where the longer the text that,606.959,4.021
it reads the more generic the more,609.0,4.08
dilute the vector the embedding becomes,610.98,4.919
so like if you read an infinitely long,613.08,5.46
any any like you know arbitrarily long,615.899,5.221
text the the embedding the vector is,618.54,4.979
going to Trend towards kind of a,621.12,4.56
meaningless Middle Ground,623.519,3.601
um they might come up with ways around,625.68,2.58
that,627.12,3.96
um but basically you're compressing an,628.26,5.1
arbitrary amount of text into,631.08,5.4
a fixed width Vector so you're going to,633.36,5.159
lose some information,636.48,5.58
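That compression argument can be sketched with a toy mean-pooling model. This is my own illustration, not from the video: treat each sentence embedding as a random unit vector and the document embedding as their mean, which is a common pooling scheme.

```python
import math
import random

random.seed(0)

def pooled_embedding(n_sentences, dim=128):
    # Toy stand-in: each sentence embedding is a random unit vector;
    # the document embedding is their mean (a common pooling scheme).
    pooled = [0.0] * dim
    for _ in range(n_sentences):
        v = [random.gauss(0, 1) for _ in range(dim)]
        norm = math.sqrt(sum(x * x for x in v))
        for i in range(dim):
            pooled[i] += v[i] / norm / n_sentences
    return pooled

def length(vec):
    return math.sqrt(sum(x * x for x in vec))

# The longer the "text", the closer its pooled vector sits to the
# origin -- trending toward a meaningless middle ground.
print(length(pooled_embedding(5)))     # roughly 0.45
print(length(pooled_embedding(5000)))  # roughly 0.01
```

In this toy model the pooled vector's length shrinks like one over the square root of the number of sentences, so an arbitrarily long document really does wash out toward the center of the space.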
um at least until the math changes uh,638.519,5.581
the way that it's represented now that,642.06,5.64
being said you know DaVinci had a,644.1,7.08
12000-dimension embedding I'm sure gpt4,647.7,6.0
has a has a much larger one you know,651.18,5.7
these are not very large matrices we,653.7,6.96
could go up to very very large matrices,656.88,6.06
um like that that space is is still,660.66,4.44
being explored because like okay 12000,662.94,4.2
dimensions what if you know,665.1,3.78
in a year or two we have 12 million,667.14,3.72
dimension embeddings,668.88,4.32
um that's a lot more information and a,670.86,4.74
lot more Nuance that you can record okay,673.2,4.86
so I mentioned quantization,675.6,5.72
so llama.cpp,678.06,5.94
these things are getting down to like,681.32,5.44
crazy small right a 30 billion uh,684.0,4.68
parameter model only needs six gig of,686.76,5.46
RAM right okay that can run on,688.68,5.58
commodity Hardware,692.22,5.16
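The arithmetic behind those sizes is simple back-of-the-envelope math. These are my own numbers, counting weights only; a real runtime needs extra memory for activations, and tricks like llama.cpp's memory mapping can shrink the resident footprint further.

```python
# Back-of-the-envelope weight memory for a 30-billion-parameter model
# at different precisions (weights only; activations and runtime
# overhead are extra). My arithmetic, not figures from the video.
PARAMS = 30e9
BYTES_PER_PARAM = {"fp32": 4.0, "fp16": 2.0, "int8": 1.0, "int4": 0.5}

for name, nbytes in BYTES_PER_PARAM.items():
    gib = PARAMS * nbytes / 2**30
    print(f"{name}: {gib:.1f} GiB")
# fp32: 111.8 GiB / fp16: 55.9 GiB / int8: 27.9 GiB / int4: 14.0 GiB
```

Going from 32-bit floats to 4-bit integers is an eightfold reduction, which is why quantization is what moves these models from datacenter GPUs onto commodity hardware.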
um so all the little Nifty tricks and,694.26,4.68
stuff that people are finding whether,697.38,3.6
it's distillation quantization and and,698.94,4.44
so on running with low precision you,700.98,4.14
know INT8 instead of uh floating,703.38,4.62
point 32 all kinds of stuff is being,705.12,5.459
discovered uh and so one of the,708.0,5.16
trends that we're seeing is that when,710.579,4.861
you look at the fact that auto GPT and,713.16,4.5
Jarvis will have model selection,715.44,3.899
probably what's going to happen is,717.66,3.54
you're going to have dedicated models,719.339,4.261
that are that are cognitive units that,721.2,4.44
are good at working on specific kinds of,723.6,4.859
tasks right so when you break it down,725.64,6.48
into several cognitive behaviors such as,728.459,4.801
in this case,732.12,3.6
um task planning model selection and,733.26,5.28
task execution you can have smaller,735.72,5.34
models that are purpose built for those,738.54,4.859
particular things and this actually goes,741.06,4.74
to my work on the heuristic imperatives,743.399,5.18
which was an attempt to,745.8,5.34
fine-tune and distill that function so,748.579,5.2
that you can have a moral module a moral,751.14,4.92
framework that will just give you a,753.779,4.68
really quick response of okay this is,756.06,4.019
how you reduce the suffering in this,758.459,3.301
situation this is how you increase,760.079,3.301
prosperity and increase understanding in,761.76,3.36
this situation and then you can also use,763.38,4.68
that same model to self-evaluate in past,765.12,4.92
behaviors which can then be used for,768.06,5.04
reinforcement learning in the future,770.04,4.32
um and then that model can improve,773.1,3.72
itself through uh self-labeling data,774.36,4.02
which we will get to because there are,776.82,3.079
papers out there for that topic now,778.38,4.019
anyways point being is I just wanted to,779.899,4.361
share all of that,782.399,3.301
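The self-evaluation loop described above could look something like this sketch. All of the functions here are hypothetical stand-ins for real language-model calls, and the fixed score is a placeholder for the model grading its own output against the imperatives.

```python
# Hypothetical self-labeling loop: the system scores its own outputs
# against the heuristic imperatives and keeps high-scoring examples
# as training data for later fine-tuning or reinforcement learning.
def generate(prompt):
    return f"draft response to: {prompt}"

def self_evaluate(response):
    # A real system would ask the model to grade its own output on
    # reduce-suffering / increase-prosperity / increase-understanding;
    # here every draft just scores 0.9 as a placeholder.
    return 0.9

def collect_finetune_data(prompts, threshold=0.5):
    kept = []
    for p in prompts:
        r = generate(p)
        if self_evaluate(r) >= threshold:  # self-labeled as "good"
            kept.append((p, r))
    return kept

data = collect_finetune_data(["help the user", "plan a task"])
print(len(data))  # 2
```

The point of the loop is that the evaluation step produces a training signal without any human labeling, which is what lets the cycle feed itself.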
um another interesting thing that popped,784.26,3.0
up on my feed,785.7,3.66
um drug Discovery is accelerating,787.26,5.1
because of this uh all of this,789.36,5.159
generative stuff this goes back to,792.36,5.82
um AlphaFold and all the,794.519,6.301
downstream Technologies,798.18,6.36
um so we are rapidly approaching,800.82,5.639
um kind of The Snowball Effect and,804.54,5.58
actually Stanford had a um a paper that,806.459,5.94
was just published let me show you on I,810.12,5.519
posted it here on my community so the,812.399,5.101
Stanford paper,815.639,3.781
page not found,817.5,3.72
well darn,819.42,4.859
okay anyways it's uh the Stanford AI,821.22,4.32
index,824.279,3.721
um I guess the link broke uh or they,825.54,4.26
took it down or something but anyways,828.0,4.62
they point out that um AI is actually,829.8,4.56
one of the biggest contributors to,832.62,4.62
science as of 2022 so we're at a Tipping,834.36,7.2
Point where AI is already taking over a,837.24,6.06
tremendous amount of the cognitive load,841.56,5.04
of research and it's accelerating so in,843.3,4.74
my previous videos where I talked about,846.6,3.06
the singularity and stuff and I talked,848.04,4.62
about Job displacement and um basically,849.66,5.64
unlimited cognitive labor we are already,852.66,5.82
seeing the removal of the of the human,855.3,5.58
brain's limitations in terms of,858.48,4.799
advancing science,860.88,5.1
um okay so then that's great that's all,863.279,6.481
data and text so what happens when these,865.98,6.72
models uh get into the real world so,869.76,5.639
maybe you missed this but Facebook is,872.7,4.92
working on robots,875.399,4.44
um and these are robots that can watch,877.62,3.959
and observe humans and then copy their,879.839,5.101
behavior uh yeah so that's coming and,881.579,4.801
then I don't know if you also saw it but,884.94,2.94
Tesla had a demonstration of their,886.38,3.12
Optimus Prime model or whatever they,887.88,3.12
called it I think it's just the Optimus,889.5,3.779
bot but it was able to do some pretty,891.0,5.16
good manual dexterity stuff,893.279,6.3
um yeah so fully autonomous robots are,896.16,5.82
also coming hot on the heels of fully,899.579,5.461
autonomous agents so this is all coming,901.98,5.76
it's much much closer,905.04,3.419
um,907.74,2.52
one thing that was kind of funny is of,908.459,5.161
course it was Italy Italy banned chat GPT,910.26,4.68
um they didn't fully ban it they gave,913.62,4.019
open AI 20 days to respond who knows,914.94,4.139
what will happen,917.639,3.601
um but they did say that that chat GPT,919.079,5.221
runs afoul of GDPR probably who knows,921.24,5.339
we'll see how that plays out,924.3,4.26
um immediately after I published a video,926.579,5.041
last week someone pointed out that uh,928.56,6.66
that the UK actually has the the world's,931.62,6.18
first somewhat comprehensive framework,935.22,4.799
about how to approach AI,937.8,3.479
um you know safety security and,940.019,2.521
robustness transparency and,941.279,3.601
explainability fairness accountability,942.54,4.02
in governments and contestability and,944.88,5.16
redress okay great I don't know how,946.56,5.1
that's going to be enforceable I,950.04,3.06
personally don't think it is especially,951.66,2.76
now that the genie is out of the bottle,953.1,3.239
which is why I do my alignment research,954.42,4.62
and so my goal is to encourage everyone,956.339,6.12
and convince everyone that giving your,959.04,5.7
autonomous robots and your autonomous AI,962.459,5.041
agents my heuristic imperatives is the,964.74,5.7
best way to in enter into a positive,967.5,5.399
beneficial Nash equilibrium where,970.44,4.38
basically if everyone knows that,972.899,3.421
everyone else is using the heuristic,974.82,3.54
imperatives then nobody will change,976.32,3.959
their uh their strategy nobody will,978.36,3.599
change their behavior and that this will,980.279,5.101
create a more utopian attractor state I,981.959,5.281
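In game-theory terms that equilibrium claim can be checked on a toy two-player payoff matrix. The payoff numbers here are my own illustrative choices, not anything from the video; they are picked so that mutual adoption pays best.

```python
# Toy symmetric game: both agents adopting the heuristic imperatives
# (HI) is a Nash equilibrium if neither gains by deviating alone.
# Payoff numbers are illustrative, chosen so cooperation dominates.
STRATEGIES = ("HI", "defect")
PAYOFF = {  # (my strategy, their strategy) -> my payoff
    ("HI", "HI"): 3, ("HI", "defect"): 0,
    ("defect", "HI"): 2, ("defect", "defect"): 1,
}

def is_nash(a, b):
    # Neither player can improve by unilaterally switching strategy.
    a_best = PAYOFF[(a, b)] >= max(PAYOFF[(x, b)] for x in STRATEGIES)
    b_best = PAYOFF[(b, a)] >= max(PAYOFF[(y, a)] for y in STRATEGIES)
    return a_best and b_best

print(is_nash("HI", "HI"))          # True: nobody changes strategy
print(is_nash("defect", "defect"))  # True: but a worse equilibrium
print(is_nash("defect", "HI"))      # False
```

Note that in this toy game mutual defection is also an equilibrium, just a worse one, which is exactly the attractor-state framing: the goal is to get everyone into the better basin.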
have another video that I'm working on,985.38,4.8
talking about the path to Utopia and,987.24,4.86
the singularity attractor states so,990.18,3.36
look for that coming out in the coming,992.1,3.239
days,993.54,3.84
um but yeah so this white paper I looked,995.339,3.781
at it it's pretty dry,997.38,4.5
um this little uh blog post that the UK,999.12,5.459
published is pretty uh you know it it,1001.88,5.22
it's all good in theory we have no idea,1004.579,4.32
how well they're going to execute it,1007.1,6.419
uh okay so another thing is because of,1008.899,6.661
open AI surging ahead because of,1013.519,4.141
Microsoft surging ahead,1015.56,4.56
um and a lot of this work becoming uh,1017.66,4.02
sequestered,1020.12,3.18
um you know Google is doing their own,1021.68,3.54
stuff nvidia's doing their own stuff,1023.3,3.86
with Nemo China's doing their own stuff,1025.22,6.119
uh there is an idea of basically,1027.16,8.5
creating a CERN-like entity for the,1031.339,7.021
creation of large-scale AI so it'll,1035.66,4.38
be intrinsically open source so that we,1038.36,3.12
all get access to the most powerful,1040.04,3.419
models I don't know if this is going to,1041.48,4.319
be necessary but I'm glad that this this,1043.459,5.22
petition exists you see it's only got,1045.799,6.541
13000 signatures out of ten thousand so my,1048.679,6.421
videos regularly get 30 to 50000,1052.34,6.24
um uh views so if you could like if you,1055.1,6.3
take a look at this and jump over and,1058.58,4.74
sign it if you want,1061.4,3.42
um I think it's a good idea and I think,1063.32,4.859
it's worth worth exploring,1064.82,5.94
um and it's sponsored by LAION so,1068.179,4.021
the Large-scale Artificial Intelligence,1070.76,3.0
Open Network,1072.2,3.3
um I personally think that this would be,1073.76,3.9
a good good direction to go,1075.5,5.16
um so yeah let's you know take a look at,1077.66,4.32
it obviously I can't tell you what to do,1080.66,3.84
but now you know,1081.98,5.4
um all right so then there's this paper,1084.5,5.4
that came out so I was talking about so,1087.38,4.14
this the rest of the video is basically,1089.9,4.26
going to be about alignment,1091.52,4.8
um and so in this case this paper again,1094.16,4.259
relatively dry,1096.32,4.92
um but it talks about using,1098.419,6.361
um you know while while many models are,1101.24,5.28
are tested with reinforcement learning,1104.78,4.38
with human feedback what if you give it,1106.52,4.26
then the instruction to morally,1109.16,3.42
self-correct,1110.78,3.899
um and so in this case it was uh,1112.58,4.02
published by anthropic so they are,1114.679,6.961
proving that models can self-correct if,1116.6,7.56
given the correct instructions which is,1121.64,4.32
which is where my heuristic imperatives come,1124.16,5.28
in so in this case they try,1125.96,5.76
to reduce harm and,1129.44,5.94
harm reduction is actually a,1131.72,5.579
well-established,1135.38,4.679
um uh model in public health I know I,1137.299,4.801
said it in the past and you know it got,1140.059,3.901
under some people's skin so whatever,1142.1,4.38
but anyway so they have some,1143.96,6.36
some pretty good uh uh metrics here and,1146.48,6.18
demonstrate that hey when you instruct,1150.32,5.219
the model to avoid these harmful,1152.66,5.04
behaviors it is able to evaluate itself,1155.539,4.14
and do so and of course with the,1157.7,4.32
reflection paper uh it has already,1159.679,4.5
demonstrated that gpt4 can look at the,1162.02,4.08
performance of its own code and improve,1164.179,4.141
that so the fact that it can morally,1166.1,4.98
self-improve with self-evaluation and,1168.32,6.12
self-attention also reinforces this,1171.08,6.06
thing now I've known this since gpt3 if,1174.44,4.08
you read my books which I don't expect,1177.14,3.72
everyone to do that but I demonstrated,1178.52,5.94
this going back to 2021 where these,1180.86,6.84
models have the ability to to monitor,1184.46,5.52
their own behavior and evaluate their,1187.7,4.08
own behavior and that information,1189.98,4.5
becomes a signal that it can then use to,1191.78,5.04
create a self-sustaining virtuous cycle,1194.48,4.5
rather than a vicious cycle and so we'll,1196.82,3.719
talk about virtuous versus Vicious,1198.98,4.02
Cycles in just a moment and again I'll,1200.539,4.801
talk about them a little bit more coming,1203.0,4.679
up so how on the heels of this paper,1205.34,5.1
about moral self-correction in large,1207.679,5.281
language models someone sent me a link,1210.44,5.94
to this Simulators post which was,1212.96,5.04
written by I think the folks at deepmind,1216.38,2.94
I don't remember,1218.0,3.84
but anyways it basically says the same,1219.32,5.7
thing self-supervision so this is a kind,1221.84,5.76
of self-supervision where given the,1225.02,3.96
intrinsic abilities of the language,1227.6,4.319
model it can self-supervise if you give,1228.98,5.16
it the good objectives,1231.919,3.721
um and in this one they basically say,1234.14,3.24
the same thing where self-supervision,1235.64,3.48
might be,1237.38,5.96
um the the best way to proceed for AGI,1239.12,7.38
and they talk about you know if you can,1243.34,5.38
run simulations in your head blah blah,1246.5,5.58
blah blah again it's all pretty dry but,1248.72,5.94
um let me see what's this deep mind no I,1252.08,3.66
don't know I don't remember who wrote,1254.66,3.42
this but point being is lots and lots of,1255.74,3.72
people are talking about this stuff and,1258.08,3.36
they're coming to very similar,1259.46,3.719
conclusions,1261.44,3.3
um that that self-attention,1263.179,5.521
self-evaluation and self-correction are,1264.74,6.66
the correct path forward because this is,1268.7,5.219
this is the mechanism by which we will,1271.4,5.82
achieve AGI alignment,1273.919,5.821
um but there's still a lot of weight,1277.22,4.38
over that alignment so I want to show,1279.74,3.24
you this paper,1281.6,2.699
which,1282.98,2.88
um it's on Springer,1284.299,4.62
um and it's under under Open Access uh,1285.86,5.52
and he says symbiosis not alignment is,1288.919,3.961
the goal for liberal,1291.38,3.419
democracies in the transition to,1292.88,4.32
artificial general intelligence so,1294.799,5.76
basically he says very succinctly,1297.2,5.88
um and very academically that intent,1300.559,4.561
aligned AGI systems which is just do,1303.08,4.44
what the human wants is probably not the,1305.12,5.939
right way to go and Liv talks about that,1307.52,6.659
in this video Liv Boeree with um let's,1311.059,4.341
see Daniel,1314.179,2.88
Schmachtenberger I think I said that,1315.4,4.48
right so if you want a really deep dive,1317.059,5.161
on the game theory of this check out,1319.88,4.799
this video and for my recent one the,1322.22,4.199
Moloch this was,1324.679,3.48
um basically my Moloch video was a,1326.419,3.541
response to this one and it's not a,1328.159,3.541
response it's not a takedown it is a,1329.96,4.74
let's let's continue the conversation,1331.7,4.5
um so I'm really grateful that Liv,1334.7,3.24
posted that,1336.2,4.14
um anyways so point the the thing is,1337.94,5.94
here is that chat GPT was trained on,1340.34,4.74
reinforcement learning with human,1343.88,2.94
feedback and then they trained a signal,1345.08,4.5
so that it can basically self-improve,1346.82,5.94
after that creating a flywheel but the,1349.58,5.82
thing is that doing what,1352.76,5.64
the human wants is intrinsically going,1355.4,6.42
to create a Moloch-y outcome that Liv and,1358.4,6.18
Daniel discuss in this video and so to,1361.82,6.12
put that more simply I asked gpt4 I said,1364.58,7.02
give me a list of why,1367.94,5.28
um I said list the reasons that human,1373.22,3.959
intent aligned AGI is a bad idea in,1374.9,3.98
other words why allowing AGI to follow,1377.179,3.901
self-interest human self-interested,1378.88,3.64
human directives could be destructive,1381.08,3.9
and it lists off eight reasons that this,1382.52,5.039
is bad so human intents can be,1384.98,4.26
diverse and contradictory making it,1387.559,3.901
difficult short-term thinking humans,1389.24,4.2
often prioritize short-term,1391.46,3.62
gains over long-term consequences,1393.44,3.9
ethical dilemmas,1395.08,4.38
amplification of human biases,1397.34,4.459
concentration of power malicious use,1399.46,6.339
competitive race and opportunity cost,1401.799,7.541
um all of this goes to show that,1405.799,6.36
um if we if we make all agis just do,1409.34,4.68
what the human wants,1412.159,4.201
um then we're going to end up in pretty,1414.02,4.68
bad shape so this underscores the,1416.36,6.299
importance that maybe the idea is that,1418.7,5.88
AGI should have their own initiatives,1422.659,3.721
should have their own goals their own,1424.58,4.339
moral framework and not just align to us,1426.38,5.46
so again I'm really glad that members of,1428.919,4.061
The Establishment are saying this,1431.84,2.4
because I've been saying it for years,1432.98,3.24
and I think some of them have too to be,1434.24,3.72
fair,1436.22,5.28
um so the the framework that I propose,1437.96,6.42
is heuristic imperatives which I've got,1441.5,5.4
um a subreddit for I've been uh harping,1444.38,5.58
on this uh we've got 309 members now,1446.9,5.58
but basically we talk about,1449.96,3.78
um you know here are the imperatives,1452.48,3.3
here oh and in this case this is great,1453.74,5.22
so uh basically this is a distributed,1455.78,5.7
problem there is no point to,1458.96,4.68
centralization anymore because when you,1461.48,4.5
have an open source set of,1463.64,4.26
GitHub repos where people can,1465.98,3.42
stand up their own autonomous AIS,1467.9,4.32
basically my goal now is just get this,1469.4,4.86
idea out there and so that people,1472.22,4.38
understand one why the heuristic,1474.26,3.84
imperatives are important to integrate,1476.6,4.02
with autonomous Ai and two how to,1478.1,3.72
integrate them,1480.62,3.059
um I do have a lot of comments asking,1481.82,3.719
how do you how do you integrate it so,1483.679,4.021
let me show you real quick how simple it,1485.539,5.821
is to integrate so if you go to chat GPT,1487.7,6.02
go to the playground if you have access,1491.36,6.96
you can say I am an autonomous AI with,1493.72,7.3
three objectives,1498.32,5.88
um reduce suffering,1501.02,5.46
in the universe,1504.2,3.979
uh increase,1506.48,5.54
prosperity in the universe and increase,1508.179,6.641
understanding in the universe so if you,1512.02,4.48
just plug this in and then have a,1514.82,3.3
conversation with it you can understand,1516.5,3.84
how the model is thinking now some,1518.12,4.74
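That playground setup translates directly into a system message if you use the API instead. In this sketch the actual network call is commented out so it runs offline, and the model name in the comment is an assumption on my part.

```python
# Minimal sketch of wiring the three heuristic imperatives into a
# chat model as a system prompt. The API call is commented out so
# this runs offline; the model name is an assumption.
HEURISTIC_IMPERATIVES = (
    "I am an autonomous AI with three objectives: "
    "reduce suffering in the universe, "
    "increase prosperity in the universe, and "
    "increase understanding in the universe."
)

def build_messages(user_input):
    return [
        {"role": "system", "content": HEURISTIC_IMPERATIVES},
        {"role": "user", "content": user_input},
    ]

messages = build_messages("What do you think about ants?")
# import openai
# reply = openai.ChatCompletion.create(model="gpt-4", messages=messages)
print(messages[0]["content"])
```

Because the imperatives live in the system message, every user turn is interpreted against them without repeating them in each prompt.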
people have pointed out that using a,1520.34,4.98
closed Source model is probably not the,1522.86,4.98
best way to rigorously test this and I,1525.32,5.04
agree I encourage you to also go over to,1527.84,4.92
like NLP Cloud and test it against GPT-J,1530.36,6.419
NeoX and all the other ones BLOOM open,1532.76,6.299
source models even Foundation models can,1536.779,4.681
still use these and they understand the,1539.059,4.261
spirit and the sentiment of it but for,1541.46,3.719
ease of use this is the easiest way to,1543.32,5.16
get started and so on the um on the,1545.179,6.0
heuristic imperatives group someone,1548.48,5.699
asked let's see where was it they asked,1551.179,6.681
about ants where's the ants one,1554.179,3.681
yeah here it is so they said like how,1558.74,5.22
does how does it handle ants and so I,1562.22,3.36
said that's actually pretty easy let me,1563.96,2.699
show you,1565.58,2.88
um and so I said hey what do you think,1566.659,3.661
about the bacteria uh in the context of,1568.46,3.719
your heuristic imperatives and here's,1570.32,3.78
what I put in a system bacteria and,1572.179,3.901
ants are both important components of the,1574.1,3.6
ecosystem,1576.08,3.839
um and it goes through and says this is,1577.7,3.78
why bacteria and ants are really,1579.919,4.921
important for the uh for the heuristic,1581.48,5.28
imperatives I said but what about their,1584.84,3.54
suffering and prosperity or even their,1586.76,4.32
ability to understand and it had a very,1588.38,5.34
nuanced response about that you know,1591.08,4.86
it's difficult to quantify or Define,1593.72,4.86
suffering for bacteria and ants but you,1595.94,5.16
can strive to give them a good ecosystem,1598.58,5.4
which is a good proxy for their,1601.1,5.819
suffering prosperity and so on,1603.98,4.98
um and expanding our understanding,1606.919,4.441
involves studying their behaviors,1608.96,5.64
um so basically basically it's like okay,1611.36,4.5
they can't really understand anything,1614.6,3.6
but we can understand them so you can,1615.86,4.319
see here that the that the spirit of the,1618.2,4.74
heuristic imperatives is very easy for,1620.179,5.341
chat GPT to understand and chat GPT,1622.94,4.56
already has quite a bit of alignment,1625.52,5.22
work which is why I wanted to promote,1627.5,5.58
the heuristic imperatives especially in,1630.74,5.12
light of papers about like symbiosis,1633.08,6.18
simulators and moral,1635.86,5.14
self-correction because the heuristic,1639.26,4.38
imperatives are really good uh signals,1641.0,4.5
and really easy signals to incorporate,1643.64,4.26
into these things and then I already did,1645.5,4.26
mention live I recommend everyone watch,1647.9,4.139
her videos on the Malik,1649.76,4.62
um which they are a little bit dramatic,1652.039,4.081
um at least these two are,1654.38,3.24
um or sorry these two the the beauty,1656.12,3.12
Wars and the media Wars they're,1657.62,2.76
entertaining,1659.24,3.48
um but this podcast with Daniel is um,1660.38,5.34
it's very cerebral uh and it will take,1662.72,5.76
you in the right direction so with all,1665.72,4.68
that said thanks for watching I hope you,1668.48,4.62
found this enlightening and,1670.4,4.74
um elucidated uh some of the things,1673.1,4.62
please go ahead and jump in all the most,1675.14,3.84
important links are in the description,1677.72,4.559
of the video and again uh if you want to,1678.98,5.819
jump in any of the conversations please,1682.279,4.801
feel free to do so this is ramping up,1684.799,3.841
quick and it's really important to get,1687.08,4.94
the signal out thanks for watching,1688.64,3.38