davidshapiro_youtube_transcripts / The AGI Moloch Nash Equilibrium Attractor States and Heuristic Imperatives How to Achieve Utopia_transcript.csv
Stevross's picture
Upload 50 files
421fea8
raw
history blame contribute delete
No virus
53.6 kB
text,start,duration
morning everybody David Shapiro here,0.719,4.56
with another video today's video is,3.179,4.561
going to be really exciting,5.279,3.961
um we're going to discuss the Malik,7.74,3.0
problem,9.24,3.779
um otherwise known as undesirable Nash,10.74,5.94
equilibria and attractor States or to,13.019,6.421
put it more simply dystopia or,16.68,5.519
Extinction in the context of artificial,19.44,4.8
general intelligence,22.199,5.221
all right so first we probably need to,24.24,6.84
Define maluk this is a concept that has,27.42,5.52
been popularized by the likes of Liv,31.08,3.9
Bowie I'm probably saying her name wrong,32.94,4.619
she was on Lex Friedman,34.98,5.16
um and also a lot of people reacted uh,37.559,5.641
pretty positively to my last video aegi,40.14,4.28
Unleashed,43.2,4.26
and so following the trend and the,44.42,5.319
conversation of course there's lots of,47.46,4.98
people out there talking about these uh,49.739,6.121
the net the alleged inevitability of,52.44,7.74
these negative outcomes so Malik to put,55.86,6.719
it very simply,60.18,6.72
um is a is a is a situation where the,62.579,6.841
system itself the rules structures,66.9,5.3
incentives and constraints of a system,69.42,4.92
intrinsically and inevitably flow,72.2,4.779
towards undesirable lose-lose States or,74.34,4.639
negative Nash equilibria,76.979,4.981
it was inspired by a demon that demands,78.979,4.78
sacrifices and it creates a vicious,81.96,4.08
cycle of more sacrifices,83.759,6.061
so a few examples of the Malik and I put,86.04,5.1
it in scare quotes because I don't,89.82,2.82
particularly like the term even though,91.14,6.119
it is useful so social media is is one,92.64,6.299
example of the molec and if you want to,97.259,3.72
know more about that watch live Bowie's,98.939,4.281
videos about media and social media,100.979,5.221
they're pretty short they're about 15 20,103.22,6.219
minutes each but basically social media,106.2,6.66
is pretty universally harmful it does,109.439,5.341
very few good things and yet people,112.86,4.86
continue to use it they're addictive and,114.78,4.74
it's just a monster that keeps wanting,117.72,4.92
to eat more and more of your time and it,119.52,6.3
is not particularly helpful that being,122.64,5.52
said we keep using it because there are,125.82,5.04
a few benefits of social media for,128.16,4.32
instance YouTube YouTube is a form of,130.86,4.739
social media but the the cost to benefit,132.48,6.119
signal is pretty bad another example of,135.599,5.461
the Malik is the example of arms races,138.599,4.921
whether it's nuclear proliferation bio,141.06,4.02
weapons other weapons of mass,143.52,4.02
destruction and so on so forth basically,145.08,4.44
nobody really wants to live in a world,147.54,4.8
where there are thousands and thousands,149.52,4.799
of nuclear weapons and bio weapons and,152.34,4.92
other weapons of mass destruction yet we,154.319,4.621
live in that world because of the,157.26,4.199
incentive structure and the technology,158.94,5.1
basically makes it an inevitability,161.459,4.321
and when you live in the world where,164.04,3.96
peop where we are literally like a few,165.78,3.959
button pushes away from the destruction,168.0,3.78
of the entire human race that is not a,169.739,3.961
good situation to be in,171.78,4.02
um and then finally most commonly the,173.7,4.08
tragedy of the commons which is,175.8,4.56
basically uh you end up with,177.78,4.16
environmental depletion and destruction,180.36,4.739
due to the incentives to exploit the,181.94,4.6
environment,185.099,3.181
um for a number of reasons and we'll,186.54,4.32
talk about uh Malik in more objective,188.28,5.28
terms in just a moment but,190.86,3.959
um you might have noticed that none of,193.56,2.88
my videos have ads and that is because,194.819,4.681
my videos are all sponsored by you my,196.44,4.98
patreon supporters,199.5,3.599
um so if you like what I'm working on,201.42,5.039
you want to incentivize my behavior then,203.099,5.521
please support me uh financially on,206.459,5.28
patreon go ahead and jump over if you go,208.62,5.699
to a higher tier I'm happy to chat with,211.739,4.801
you on an individual basis either via,214.319,5.041
patreon chat I'll even hop on video,216.54,4.259
calls,219.36,3.959
um so yeah that is uh that is the plug,220.799,5.401
and moving right back into the show,223.319,6.601
okay so when you listen to people talk,226.2,6.599
about Malik it sounds like some kind of,229.92,6.179
Eldritch Horror like Cthulhu,232.799,5.58
um there are a few big names out there,236.099,3.661
right now,238.379,3.0
um I don't particularly agree with them,239.76,3.78
so I'm not going to call anyone out but,241.379,4.44
there are people that think that you,243.54,4.08
know we're all gonna die it's inevitable,245.819,4.081
just give it give up now throw in the,247.62,3.66
towel,249.9,5.28
um so rather than give this phenomenon a,251.28,5.82
big spooky scary name that makes it,255.18,4.86
sound like Cthulhu let's break down the,257.1,5.099
characteristics of Malik into more,260.04,4.86
conventional terms so specifically we're,262.199,4.801
going to talk about market theory and,264.9,4.98
Game Theory and describe the Malik in,267.0,7.02
those uh those terms so first is,269.88,6.36
perverse incentives so a perverse,274.02,3.959
incentive is,276.24,5.34
um is a systemic or structural rule or,277.979,6.361
Paradigm that creates behaviors that run,281.58,5.04
contrary to the intended goals or,284.34,4.139
desired States,286.62,4.2
um for instance with social media the,288.479,4.201
the perverse incentive is that you end,290.82,4.14
up Doom scrolling which makes you you,292.68,3.66
wanted to use social media to get,294.96,3.06
happier but you end up Doom scrolling,296.34,4.02
and because the system incentivized that,298.02,4.619
incentivizes that behavior which results,300.36,4.86
in more anxiety depression rage and so,302.639,6.601
on so perverse incentives also exist in,305.22,5.12
the wide world,309.24,3.78
dealing with like corn subsidies oil,310.34,5.5
subsidies all sorts of stuff if you want,313.02,4.619
more examples just Google it it like,315.84,3.72
there's thousands and thousands of,317.639,3.721
examples of perverse incentives it,319.56,4.32
extends into Education Health Care all,321.36,3.54
kinds of stuff,323.88,3.42
Market externalities so Market,324.9,5.76
externality is a is a situation where a,327.3,6.179
market Behavior does not price in or the,330.66,5.22
the market price does not reflect the,333.479,5.461
true and total cost uh or benefit of,335.88,5.94
something so in in some cases there are,338.94,5.28
positive Market externalities for,341.82,5.219
instance the cost of vaccination or,344.22,5.819
public health campaigns is often much,347.039,5.581
lower than the overall benefit you get,350.039,4.741
knock on positive effects now that being,352.62,3.54
said there are also negative Market,354.78,4.02
externalities such as pollution and,356.16,4.74
environmental degradation in other words,358.8,4.44
the cost of cutting down a tree and,360.9,4.859
selling that tree is much lower than the,363.24,4.2
total cost of the impact of the,365.759,4.141
environment but because the environment,367.44,4.979
is so huge and it is a large dynamic,369.9,4.859
system it is difficult to price that in,372.419,6.301
without regulations and other things,374.759,5.94
so perverse incentives and Market,378.72,3.72
externalities these are Market Theory,380.699,4.261
Concepts that contribute to the the,382.44,4.319
concept of the molec that's not the,384.96,4.079
whole whole picture,386.759,4.861
um an undesirable Nash equilibrium is a,389.039,5.641
situation where no uh stakeholder or,391.62,4.98
participant is incentivized to alter,394.68,3.42
their behavior in other words they are,396.6,4.02
using their optimal strategy and yet,398.1,3.96
though everyone is using their own,400.62,3.72
optimal strategy it will still result in,402.06,4.5
a net loss or undesirable outcomes for,404.34,4.919
all participants anyways,406.56,5.22
um so basically dystopia and then,409.259,4.5
finally an undesirable attractor state,411.78,4.56
which is the ultimate steady state or,413.759,4.38
stable state that a system will result,416.34,4.139
in given the existing structures and,418.139,6.12
rules even though if it's an undesirable,420.479,5.821
attractor State it's it's an outcome,424.259,4.141
that nobody really wants even if that,426.3,4.2
outcome seems inevitable,428.4,3.66
so again like I said I don't,430.5,3.18
particularly like the term Malik because,432.06,3.96
it's big and spooky and scary but it is,433.68,5.1
a useful shorthand to basically say the,436.02,4.32
set of perverse incentives and Market,438.78,3.359
externalities,440.34,3.78
um and and everything else that goes,442.139,4.56
into the market theory economic theory,444.12,5.479
and game theory of this of any system,446.699,5.881
could be negative so it's basically the,449.599,4.72
the monster,452.58,4.98
okay so I've talked a lot about,454.319,6.421
um incentives and constraints and so,457.56,6.12
what I did was I worked to identify all,460.74,4.859
of the kind of groups or the categories,463.68,3.84
of stakeholders,465.599,4.081
um and and also to elucidate their,467.52,3.6
incentives and constraints and keep in,469.68,3.66
mind that the slide deck is a very uh,471.12,4.019
concise shorthand,473.34,4.32
um for the paper that I'm working on,475.139,5.581
um so but anyways corporations their,477.66,5.22
primary incentive is to maximize profit,480.72,4.68
and their biggest constraint is the law,482.88,5.64
regulations so on and so forth for the,485.4,4.799
military they want to maximize their,488.52,4.14
Firepower and their biggest constraint,490.199,4.821
is geopolitics AKA,492.66,5.879
their military competitors as well as uh,495.02,6.359
political uh constraints for governments,498.539,5.581
governments have a multi-polar set of,501.379,4.54
incentives right they might want to,504.12,4.019
maximize tax revenue but they also want,505.919,3.96
to maximize,508.139,4.801
um you know certain demographic uh uh,509.879,6.181
priorities economic priorities GDP so on,512.94,5.099
and so forth so governments have multi,516.06,3.599
multi-polar incentives and the,518.039,4.141
constraint is actually part of the,519.659,4.081
incentive structure which is the,522.18,3.12
citizenry,523.74,3.96
um citizens have certain limits right we,525.3,4.2
can only work so much we can only have,527.7,3.48
so much output,529.5,3.66
um and another major constraint for,531.18,4.68
governments is the natural resources of,533.16,4.679
the land that they control and then for,535.86,3.599
individuals we all want to maximize,537.839,3.481
self-interest this is an accepted,539.459,6.0
Paradigm in uh economic theory today and,541.32,6.9
but our constraints are uh multi-polar,545.459,4.921
our constraints are you know time in the,548.22,5.1
day physical energy food money,550.38,5.459
um the the reach of our individual,553.32,4.68
connections and our networks so on and,555.839,4.68
so forth so we individuals have like the,558.0,5.1
most open-ended incentive but we also,560.519,4.44
have the most constraints,563.1,3.12
um so this is just one way to think,564.959,3.661
about okay all of the stakeholders in,566.22,4.799
the entire Globe have these different,568.62,4.5
incentives and constraints and we're all,571.019,3.721
playing on the same stage which is,573.12,3.42
planet Earth,574.74,4.62
so given how big and dynamic the world,576.54,6.06
is it's not really possible to achieve a,579.36,6.06
true Nash equilibrium Because by the,582.6,5.34
time something happens in one area and,585.42,4.5
all the effects are fully known and it's,587.94,4.26
fully embedded into the market the,589.92,4.8
situation will have changed that being,592.2,3.56
said,594.72,4.799
there are large forces that are pushing,595.76,6.639
us towards certain equilibrium so for,599.519,5.601
instance the justice system,602.399,4.861
disincentivizes certain behaviors like,605.12,3.94
theft and murder to get what you want,607.26,4.56
and so part of our equilibrium our,609.06,5.219
individual Nash equilibrium is that we,611.82,4.26
pay our taxes we don't kill we don't,614.279,4.441
steal etc etc because it does not,616.08,5.0
benefit us to deviate from that strategy,618.72,5.94
likewise corporations fall into Nash,621.08,5.74
equilibrium where by and large they,624.66,3.78
don't abuse their employees within,626.82,4.019
reason they don't abuse the environment,628.44,5.399
within reason they don't engage in you,630.839,4.861
know theft and Corruption within reason,633.839,4.801
again the constraints are there but,635.7,4.92
corporations are constantly testing,638.64,4.259
their boundaries but by and large,640.62,4.92
corporations will play Within the rules,642.899,4.141
that are given to them,645.54,4.08
so because when you look at that ditto,647.04,4.68
for governments and militaries,649.62,5.1
um because we are all operating with our,651.72,5.1
incentives our intrinsic motivations or,654.72,3.54
our incentives as well as those,656.82,3.9
constraints we all kind of fall into an,658.26,5.4
optimal strategy now that being said the,660.72,4.5
optimal strategy for all of the,663.66,4.26
stakeholders globally is presently still,665.22,5.22
moving us towards dystopia towards the,667.92,3.84
attractor State the undesirable,670.44,4.56
attractor state of dystopia however that,671.76,5.04
being said we all want to move towards,675.0,3.66
Utopia right and this has happened,676.8,3.659
plenty times in the past the Roman,678.66,4.02
Empire collapsed plenty of other empires,680.459,4.801
have collapsed even though nobody well,682.68,4.56
many people didn't want it but some,685.26,3.48
people did,687.24,3.48
um and one thing I do want to add as a,688.74,4.62
caveat is a lot of this is a huge,690.72,5.22
oversimplification I've spent basically,693.36,5.099
the last 36 hours almost straight except,695.94,4.74
for sleeping learning about this stuff,698.459,4.141
because I realized how important it was,700.68,4.5
so changing the ultimate attractor State,702.6,4.859
moving changing it from the current,705.18,5.339
dystopic trajectory that we're on to a,707.459,5.041
more utopic trajectory requires,710.519,4.32
structural changes to the whole system,712.5,4.86
basically don't hate the player change,714.839,4.201
the game,717.36,3.719
okay so I've mentioned this a couple,719.04,3.72
times added this slide in just in case,721.079,2.94
you're not familiar with the Nash,722.76,3.84
equilibrium the tldr of the Nash,724.019,4.921
equilibrium is that,726.6,4.14
um you assume that all players in a game,728.94,3.72
are rational and they choose the best,730.74,4.74
strategy given the rules of the game and,732.66,4.26
the and the behavior of the other,735.48,3.0
players,736.92,3.659
um the idea is that is that a Nash,738.48,5.22
equilibrium is a stable outcome in which,740.579,5.341
no player will benefit from changing,743.7,4.62
their strategy now that being said you,745.92,4.2
can have a desirable Nash equilibrium,748.32,3.959
where everyone is cooperative and,750.12,4.26
everyone is benefiting or you can have a,752.279,4.261
negative Nash equilibrium where,754.38,4.38
basically everyone loses and then you,756.54,3.78
can also have a zero-sum game where you,758.76,4.56
have winners and losers so the very very,760.32,5.22
oversimplified tldr is a Nash,763.32,4.68
equilibrium can result in a win-win a,765.54,6.0
lose-lose or a win-lose right now it,768.0,6.06
looks like the Nash equilibrium of the,771.54,5.16
whole world is heading towards lose some,774.06,4.44
people believe that it is intrinsically,776.7,4.34
win lose that there's winners and losers,778.5,4.8
I personally believe that we can head,781.04,4.06
towards a win-win situation and I think,783.3,3.0
that,785.1,3.12
when people are being honest most people,786.3,3.9
want a win-win situation it's just,788.22,4.98
there's a sense of fatalism or a belief,790.2,6.24
that it's not possible and so if you,793.2,5.04
honestly believe that win-win is not,796.44,3.42
possible then maybe you default to win,798.24,3.18
lose where well I don't mind if everyone,799.86,4.26
else loses as long as I win but what I'm,801.42,5.099
going to try and do is help you,804.12,3.659
understand and help the world move,806.519,3.661
towards a belief and Adoption of a,807.779,4.62
win-win mentality,810.18,5.88
okay so there are a couple of existing,812.399,6.24
mitigation strategies,816.06,3.899
um that people are trying to use to,818.639,4.081
avoid the dystopic or Extinction out,819.959,7.021
outcome so you know whether uh when,822.72,5.64
you're looking at it in terms of attract,826.98,3.44
attractor States,828.36,4.68
dystopia is one where basically everyone,830.42,5.74
is miserable right or outright,833.04,4.979
Extinction that's another possible,836.16,3.72
attractor State because if humans go,838.019,4.32
extinct then the world returns to,839.88,5.94
stability without us right so it's it it,842.339,5.461
would be irresponsible to say that that,845.82,3.78
neither of those outcomes are possible,847.8,3.36
or likely I'm not going to comment on,849.6,3.84
How likely they are but what I will say,851.16,4.739
is that they are both possible and right,853.44,4.019
now as I mentioned in the last slide,855.899,4.921
people believe that Utopia is just not,857.459,6.241
possible so why even go for it,860.82,5.28
um so mutually assured destruction is an,863.7,4.02
example of,866.1,3.9
um an equilibrium right so an,867.72,5.16
equilibrium where hey we all have the,870.0,4.38
ability to kill each other so nobody,872.88,3.0
make a move,874.38,3.84
um and uh what was the movie The uh the,875.88,3.84
one with Brad Pitt where they're in Nazi,878.22,2.88
Germany you know he called it a Mexican,879.72,3.239
standoff,881.1,3.96
um actually I probably shouldn't have,882.959,3.541
said that that's probably an offensive,885.06,3.779
term anyways mutually assured,886.5,4.32
destruction it's a well-known Doctrine,888.839,4.081
where basically there's a milli there's,890.82,4.319
a nuclear buildup on both sides so,892.92,4.919
nobody pulls the trigger,895.139,3.841
um that's on the military and,897.839,3.3
geopolitical stage in terms of,898.98,5.219
capitalism and Market Theory the current,901.139,5.101
uh Paradigm that is popular is called,904.199,4.561
stakeholder capitalism so stakeholder,906.24,5.279
capitalism is the idea that rather than,908.76,4.379
just trying to,911.519,3.841
um it replaces shareholder capitalism so,913.139,3.741
shareholder capitalism,915.36,4.02
prioritizes only the shareholders and,916.88,4.6
their desires which forces corporations,919.38,5.1
to maximize profit at the expense of,921.48,4.979
everything else with stakeholder,924.48,5.58
capitalism the idea is to um is to,926.459,5.341
basically treat the entire world as your,930.06,4.1
stakeholders which includes,931.8,4.38
private citizens that are not your,934.16,5.2
customer the employees all over the,936.18,5.459
world governments as well as the,939.36,4.38
environment so this is called ESG this,941.639,3.981
is uh promoted by BlackRock which is,943.74,4.32
environmental social and governance so,945.62,4.06
that's basically a litmus test that,948.06,4.98
BlackRock uses for investment and then a,949.68,5.88
more General way of looking at this is,953.04,5.22
called uh the triple bottom line Theory,955.56,5.279
or doctrine which basically says that um,958.26,5.699
that that on top of economic incentives,960.839,5.881
you should also include environmental,963.959,4.74
and social and uh incentives or,966.72,4.44
considerations but all of these are,968.699,5.521
broadly types of stakeholder capitalism,971.16,6.479
so both of these uh doctrines or ideas,974.22,6.419
attempt to create a more desirable Nash,977.639,6.241
equilibrium so in the in the case of uh,980.639,4.621
mutually assured destruction the,983.88,4.079
equilibrium is we will we will maintain,985.26,5.1
a nuclear Arsenal but we won't use it,987.959,4.801
that is the optimal strategy in the case,990.36,5.279
of stakeholder capitalism the idea is we,992.76,5.04
will adopt a broad array of behaviors,995.639,4.94
that mean that we don't abuse employees,997.8,5.52
suppliers or the environment while still,1000.579,5.141
making profit that is the goal now I,1003.32,4.139
will say that both of these have very,1005.72,4.32
very deep flaws which would take many,1007.459,6.3
many videos to unpack but you know I,1010.04,5.34
think you get the idea these are the,1013.759,4.801
current attempts that are stable-ish,1015.38,5.399
right now in working-ish right now but,1018.56,4.8
might also still be pushing us towards a,1020.779,4.981
dystopian outcome even if they are,1023.36,5.76
currently stable enough,1025.76,4.919
now,1029.12,4.16
technology as a destabilizer,1030.679,5.12
technological leaps have always,1033.28,5.08
destabilized the system starting with,1035.799,4.66
the printing press which which led to,1038.36,5.4
religious and economic and uh political,1040.459,4.74
upheaval,1043.76,3.9
um looking at you uh Martin Luther and,1045.199,4.201
French Revolution,1047.66,3.54
um then the Industrial Revolution which,1049.4,4.62
led to huge social upheaval with,1051.2,5.52
urbanization factories and the,1054.02,4.58
dislocation of many jobs,1056.72,4.319
which the Industrial Revolution also,1058.6,4.84
contributed directly to World War one,1061.039,3.721
and two because those were the first,1063.44,4.44
industrial scale Wars nuclear weapons,1064.76,6.419
internet silicon all of the above lead,1067.88,7.2
to destabilization AGI or autonomous AI,1071.179,6.0
systems no different it's just another,1075.08,3.839
technological leap that will cause that,1077.179,4.141
will destabilize everything again,1078.919,4.021
and it's pretty much a foregone,1081.32,5.88
conclusion that the advancement of AI is,1082.94,8.22
going to destabilize stuff so this uh,1087.2,6.479
forces us to ask questions what is the,1091.16,5.16
new attractor state if,1093.679,5.701
well in in the past the attractor state,1096.32,3.96
was,1099.38,3.84
different because you know technological,1100.28,6.54
uh abilities to affect the world we're,1103.22,5.88
limited right when the world was powered,1106.82,4.32
by coal there was only so much damage we,1109.1,4.56
could do to each other and the world,1111.14,5.1
um but as technology advanced the amount,1113.66,5.1
of damage possible went up so the new,1116.24,4.74
attractor State also changed as well as,1118.76,5.52
all the incentives of participants in,1120.98,4.74
the world and that includes employers,1124.28,4.08
individuals governments militaries so on,1125.72,5.64
technology changes the game changes the,1128.36,6.0
fundamental nature of the Game of Life,1131.36,4.559
or reality or however you want to call,1134.36,2.64
it,1135.919,3.421
um and so the question is okay with the,1137.0,4.5
rise of AGI how does that change the,1139.34,4.5
attractor State and there's as far as I,1141.5,3.78
can tell there's basically three states,1143.84,4.92
there's Utopia dystopia and Extinction,1145.28,5.759
there's probably a lot of gray area in,1148.76,4.14
between and there might be a fourth kind,1151.039,4.02
of state that we're heading towards but,1152.9,4.56
in terms of useful shorthand Utopia,1155.059,5.701
dystopia and Extinction so the follow-up,1157.46,5.339
question is what can we do to alter that,1160.76,4.08
attractor state is there anything that,1162.799,4.141
we can do structurally or systematically,1164.84,5.04
to to favor one of those outcomes over,1166.94,6.3
another and then finally what is the,1169.88,5.76
optimal strategy for each of those kinds,1173.24,3.6
of stakeholders that I mentioned,1175.64,3.48
individuals corporations governments and,1176.84,4.92
militaries to create a new Nash,1179.12,5.58
equilibrium in light of AGI,1181.76,6.5
so basically we need a Nash equilibrium,1184.7,8.28
uh framework for implementing AGI to to,1188.26,7.539
push us towards a desirable or positive,1192.98,5.819
attractor state so all that is a really,1195.799,5.76
complex way of saying we need a plan we,1198.799,6.0
need a plan of in of implementing AGI in,1201.559,5.401
such a way that we will we will Trend,1204.799,4.62
towards Utopia rather than dystopia or,1206.96,4.94
Extinction,1209.419,2.481
okay so with all that in mind what are,1212.2,5.74
some of the success criteria for this,1215.72,4.62
framework what are the goals of this,1217.94,3.72
framework how do we know if this,1220.34,3.9
framework is going to be successful one,1221.66,4.62
it needs to be easy to implement and,1224.24,6.0
understand the reason is because the,1226.28,6.12
ability for individuals at all levels,1230.24,4.14
whether it's individual persons like,1232.4,4.08
myself or corporations or even small,1234.38,6.179
Nations to implement AGI is ramping up I,1236.48,6.84
was on Discord last night and there are,1240.559,4.62
people that just after tinkering for a,1243.32,4.08
few weeks have created fully autonomous,1245.179,3.961
AI systems,1247.4,3.84
and one of the things that we discussed,1249.14,6.06
was okay if me or you or whoever some of,1251.24,5.58
these people are not even coders they,1255.2,4.08
learn to code with chat GPT,1256.82,4.859
if everyone is going to be capable of,1259.28,5.519
creating autonomous AI systems,1261.679,5.521
now and it's only going to ramp up over,1264.799,5.341
the coming months and years then,1267.2,5.099
whatever framework that we come up with,1270.14,3.96
is going to have to be universally,1272.299,4.38
understandable easy to implement and,1274.1,4.26
easy to understand,1276.679,4.141
if it's soteric if it's esoteric sorry,1278.36,4.14
if it's esoteric no one will use it,1280.82,3.42
because they won't understand it,1282.5,3.419
number two,1284.24,4.26
all stakeholders have to be incentivized,1285.919,4.74
to use this framework or in other words,1288.5,4.44
this framework must represent the,1290.659,4.14
optimal strategy so that people won't,1292.94,4.619
deviate from it there's basically,1294.799,4.38
everyone has to benefit from using it,1297.559,3.801
and there has to be compounding returns,1299.179,4.801
incentivizing everyone to say hey you,1301.36,4.299
should be using this framework because,1303.98,3.24
this is the optimal strategy for,1305.659,2.961
everyone,1307.22,3.6
above and beyond that this framework,1308.62,3.939
needs to be adaptable and responsive or,1310.82,4.859
dynamic because again the world changes,1312.559,5.581
and so it's really difficult to create a,1315.679,7.021
framework that is a hard set of rules to,1318.14,7.14
follow which will result in unintended,1322.7,4.8
consequences and instabilities and other,1325.28,4.32
market failures so it has to be context,1327.5,4.799
dependent and changeable over time,1329.6,5.1
number four this framework has to be,1332.299,4.62
inclusive and representative in that it,1334.7,5.099
cannot exclude any stakeholder it cannot,1336.919,5.701
exclude any citizens from Any Nation or,1339.799,4.981
religion it cannot exclude any,1342.62,4.34
Corporation or government or military,1344.78,5.7
because like it or not we all share the,1346.96,5.74
same planet and we are all stakeholders,1350.48,5.52
in this outcome,1352.7,4.859
um and,1356.0,4.08
one thing that I want to address is that,1357.559,5.341
um there have been cases where Nations,1360.08,5.579
agree on like Rules of Engagement and,1362.9,5.1
rules of War like we don't use Napalm,1365.659,4.26
anymore because it was decided that like,1368.0,4.08
okay this is inhumane um or maybe it was,1369.919,4.081
white phosphorus anyways there are,1372.08,3.5
certain kinds of weapons mustard gas,1374.0,4.08
those are things that even though War,1375.58,4.18
Nations might go to war with each other,1378.08,3.599
they still agree not to do certain,1379.76,4.08
things because they understand that the,1381.679,4.261
soldiers are stakeholders as well as the,1383.84,3.42
citizens who might get caught in the,1385.94,4.26
crossfire so there is some precedent of,1387.26,4.86
Nations agreeing on how to conduct War,1390.2,3.9
even though destruction is one of the,1392.12,3.66
goals of War,1394.1,4.26
uh number five this framework has to be,1395.78,4.5
scalable and sustainable it has to,1398.36,4.14
include the entire Globe as well and,1400.28,3.779
that's not just the people on the globe,1402.5,4.5
it has to include the environments uh,1404.059,5.461
and ecosystems around the globe which we,1407.0,5.159
all depend on anyways so I personally,1409.52,4.86
see humans as part of the ecosystem not,1412.159,5.4
up not separate from it and finally this,1414.38,4.799
framework has to be transparent and,1417.559,3.841
trustworthy because perception is,1419.179,5.641
reality right if if people perceive a,1421.4,6.0
framework to be destructive like ESG is,1424.82,4.979
a perfect example the perception of ESG,1427.4,4.8
is awful why because it's championed by,1429.799,4.26
BlackRock which is one of the most I,1432.2,4.26
think it is the wealthiest company on,1434.059,5.581
the planet right and so because ESG is,1436.46,6.0
is championed by you know a,1439.64,5.519
multi-trillion dollar Corporation it is,1442.46,5.28
not trusted and that perception makes it,1445.159,4.081
bad,1447.74,3.36
I don't know whether or not ESG is good,1449.24,3.419
or bad but the perception certainly is,1451.1,2.819
bad,1452.659,3.121
um so transparency and trustworthiness,1453.919,3.421
are critical for the success of this,1455.78,2.639
framework because if people don't trust,1457.34,4.079
that they're not going to use it either,1458.419,5.221
and finally,1461.419,5.701
um so this is where I pitch my work,1463.64,6.24
um so what I my proposed solution to all,1467.12,5.1
of this is what I call the heuristic,1469.88,5.159
imperatives which is a set of rules or,1472.22,4.74
principles that can be incorporated into,1475.039,5.701
AGI systems that will push it into this,1476.96,6.24
uh Direction and so these imperatives,1480.74,4.86
are one reduce suffering in the universe,1483.2,4.38
two increase prosperity in the universe,1485.6,4.199
and three increase understanding in the,1487.58,4.5
universe one way to say this is that it,1489.799,4.081
is a multi-objective optimization,1492.08,2.94
problem,1493.88,3.12
meaning that it's not just one objective,1495.02,4.08
function it's actually three that the,1497.0,5.34
AGI has to work on implementing,1499.1,5.28
so in the last video people asked how do,1502.34,3.36
you implement these it's actually really,1504.38,3.72
really easy you can just plug them in to,1505.7,5.28
chat GPT and talk about it there's a few,1508.1,4.559
places that you can get involved in the,1510.98,4.8
conversation excuse me one is on Reddit,1512.659,5.041
I created a new subreddit called r slash,1515.78,3.899
heuristic comparatives,1517.7,3.54
um people are sharing their work there,1519.679,3.0
so if you want to see the discussion,1521.24,4.319
jump in on that I also have a lot of my,1522.679,5.221
own work up on GitHub I'm including a,1525.559,3.781
few papers that I have written and I'm,1527.9,4.5
working on under um github.com Dave shop,1529.34,5.52
here is to comparatives and then finally,1532.4,4.139
the most active Community to discuss,1534.86,3.78
this stuff is the cognitive AI lab,1536.539,4.081
Discord server which I started over a,1538.64,4.08
year ago and links to all this is in the,1540.62,4.26
description of the video so because of,1542.72,3.74
that I don't want to spend too much time,1544.88,4.5
rehashing stuff but I just wanted to,1546.46,4.599
connect to the conversation because,1549.38,3.98
again transparency and trustworthiness,1551.059,5.521
are really critical to this solution,1553.36,5.74
but let's talk more broadly about this,1556.58,5.52
solution of um the heuristic imperatives,1559.1,5.22
and these success criteria,1562.1,5.4
so we outlined six success criteria for,1564.32,5.88
a framework that will push us towards a,1567.5,4.98
positive Nash equilibrium or a desirable,1570.2,5.459
attractor State AKA Utopia so the,1572.48,4.98
heuristic imperatives as I mentioned are,1575.659,3.421
very easy to implement you can put them,1577.46,4.38
in the chat GPT system window you can,1579.08,4.52
just include them in the conversation,1581.84,5.12
you can use them for evaluation,1583.6,6.52
cognitive control uh historical,1586.96,6.4
self-evaluation planning prioritization,1590.12,6.299
super easy to implement and as I,1593.36,4.439
mentioned lots of people are having the,1596.419,3.781
discussions some of the autonomous AI,1597.799,5.041
entities that people have created,1600.2,4.32
um the the AIS that they created,1602.84,3.12
actually end up usually being really,1604.52,3.659
fascinated by the heroes to comparatives,1605.96,4.319
and they they kind of gravitate towards,1608.179,3.841
them saying like oh yeah this is my,1610.279,3.121
purpose,1612.02,2.639
um so it's really interesting to watch,1613.4,3.3
that work unfold,1614.659,4.081
um number two the stakeholders are all,1616.7,3.599
incentivized to use the heroes to,1618.74,5.039
comparatives because just imagining a,1620.299,5.701
state where you have less suffering more,1623.779,4.14
prosperity and more understanding is,1626.0,4.44
beneficial now above and beyond that the,1627.919,4.021
stakeholders all stakeholders are,1630.44,2.94
incentivized to use the hero's,1631.94,4.8
comparatives because then you have a,1633.38,5.1
level set playing field where you know,1636.74,3.419
that everyone is abiding by the same,1638.48,3.9
rules right because when you have a game,1640.159,4.62
imagine the game Monopoly if someone is,1642.38,3.659
playing by a different set of rules,1644.779,3.0
you're not going to play with them right,1646.039,4.321
even though it's a competition you're,1647.779,4.321
you still say we're going to abide by,1650.36,3.66
the same rules you collect 200 when you,1652.1,2.939
pass go,1654.02,4.8
if on the other hand everyone is playing,1655.039,6.301
by the heuristic imperatives then you,1658.82,4.32
will be incentivized to adhere to those,1661.34,4.199
role those rules knowing that the net,1663.14,4.2
effect is going to be beneficial for,1665.539,3.0
everyone,1667.34,3.24
number three the years to compare,1668.539,4.62
imperatives are adaptable because they,1670.58,4.44
intrinsically incentivize learning and,1673.159,4.441
adaptation with the third uh heuristic,1675.02,4.32
imperative of increased understanding,1677.6,3.9
this is what I also call The Curiosity,1679.34,5.339
function so basically you don't want an,1681.5,5.4
AGI to be dumb and just satisfied with,1684.679,3.841
what it knows about the universe you,1686.9,3.06
also don't want it to be satisfied with,1688.52,3.779
human ignorance so by increasing,1689.96,5.94
understanding that includes uh that one,1692.299,5.88
that intrinsically makes agis curious,1695.9,3.6
which means that they are going to want,1698.179,3.721
to learn and challenge their own beliefs,1699.5,5.1
but likewise they will also encourage,1701.9,5.399
not force but encourage humans to learn,1704.6,4.86
and adapt so the heuristic imperatives,1707.299,4.26
as a system is intrinsically adaptable,1709.46,4.86
because learning and curiosity are baked,1711.559,3.72
in,1714.32,2.82
number four the heuristic imperatives,1715.279,3.78
are inclusive now and all the,1717.14,3.6
experiments that I've done going back to,1719.059,4.86
gpt3 and now gpt4,1720.74,5.28
um language models already understand,1723.919,4.981
the spirit or the intention of the,1726.02,4.98
heuristic imperatives in that they,1728.9,4.139
should be all-inclusive,1731.0,4.08
um and so that makes them very very,1733.039,5.161
context dependent so for instance if you,1735.08,4.74
um plug in the heuristic comparatives to,1738.2,4.74
chat GPT and ask it about religion it,1739.82,5.4
will advocate for tolerance and creating,1742.94,4.26
space for people to explore religion on,1745.22,4.26
their own and if you further unpack that,1747.2,6.24
uh chat GPT and going back to gpt3 we'll,1749.48,6.72
say that things like individual autonomy,1753.44,4.44
is actually really important for,1756.2,3.42
Humanity to thrive,1757.88,4.44
uh they're scalable the here's the,1759.62,5.039
imperatives it used to just be very,1762.32,4.92
simply reduce suffering increase,1764.659,4.02
prosperity and increase understanding,1767.24,3.24
but I established the scope of in the,1768.679,2.821
universe,1770.48,2.76
because that preemptively answers a lot,1771.5,3.24
of questions,1773.24,3.299
um because it's not just a matter of,1774.74,4.02
okay let's just look at Earth or let's,1776.539,4.081
just look at one nation let's consider,1778.76,3.84
the entire universe so that is the scope,1780.62,3.779
of the imperatives so it's not just,1782.6,4.14
Globe Global it is universal,1784.399,4.681
and then finally uh the heuristic,1786.74,5.34
imperatives encourage transparency uh,1789.08,5.94
because it they incentivize open,1792.08,6.12
communication trust and autonomy but,1795.02,5.399
above and beyond that uh they're,1798.2,4.62
transparent in that if everyone abides,1800.419,3.841
by them everyone knows that everyone is,1802.82,3.3
playing by the same rules now that being,1804.26,4.019
said in the previous video I did address,1806.12,4.62
the Byzantine generals problem which is,1808.279,4.38
that you might have agents in the system,1810.74,4.5
that are either defective faulty or,1812.659,5.281
malicious and this is also addressed by,1815.24,4.86
the heuristic imperatives because what,1817.94,4.56
you will do is you will detect when an,1820.1,4.199
agent is not playing by the rules and,1822.5,3.179
you will track that and we'll talk about,1824.299,3.301
that in just a moment,1825.679,4.98
so the positive Nash equilibrium that,1827.6,6.299
the heuristic imperatives encourage have,1830.659,5.4
four basic criteria that I was able to,1833.899,4.441
think of one is mutual benefits it is,1836.059,4.74
mutually beneficial if all agents in the,1838.34,4.76
system or all participants in the system,1840.799,5.461
adhere to the heuristic imperatives,1843.1,5.62
meaning that the rising tide lifts all,1846.26,5.519
boats if we all work to reduce suffering,1848.72,4.679
if we all work to increase prosperity,1851.779,3.061
and we all work to increase,1853.399,4.441
understanding then we all benefit,1854.84,5.339
um and we get compounding returns trust,1857.84,3.959
and reputation,1860.179,3.181
um so having shared goals and,1861.799,4.74
transparency is a natural result of the,1863.36,4.62
heuristic imperatives as I just,1866.539,2.24
mentioned,1867.98,3.36
resilience and cooperation so this is,1868.779,5.561
this is an interesting outcome which is,1871.34,6.0
that for a for an equilibrium to be,1874.34,5.819
reached it has to be stable and so the,1877.34,4.26
heroes to comparatives create a,1880.159,3.961
resilient system in which,1881.6,4.679
um there's going to be mutual policing,1884.12,5.159
as well as uh some self-correcting,1886.279,5.28
behaviors which will unpack more in a in,1889.279,5.28
a slider too but it is resilient because,1891.559,6.48
it increa encourages collaboration and,1894.559,6.0
cooperation as well as self-regulation,1898.039,5.401
and policing and then finally ultimately,1900.559,4.801
long-term stability that is the entire,1903.44,4.68
point of a national equilibrium and the,1905.36,5.819
and a a desirable attractor state is one,1908.12,5.82
that is stable you don't want chaos or,1911.179,5.841
instability in the future,1913.94,3.08
so,1917.679,3.761
one thing that is becoming apparent,1919.399,3.78
especially as I watch the landscape,1921.44,4.44
change if you look at Auto GPT,1923.179,4.5
all kinds of people are going to be,1925.88,3.899
building their own autonomous systems,1927.679,3.901
and so what you're what we're creating,1929.779,6.181
is a decentralized AGI ecosystem,1931.58,7.5
and so when this happens when everyone,1935.96,5.16
can create an AGI with their own goals,1939.08,3.3
with their own imperatives with their,1941.12,3.299
own design and their own flaws we're,1942.38,3.36
going to end up with a really really,1944.419,4.441
kind of wild west dystopian you know,1945.74,5.939
chaotic world so,1948.86,4.86
one way to mitigate this,1951.679,3.901
decentralization drift is to adhere to,1953.72,4.439
the imperative uh sorry heuristic,1955.58,5.339
imperatives and as I mentioned there is,1958.159,5.341
there's Cooperative benefits right if,1960.919,4.561
you and everyone else you know working,1963.5,4.5
on autonomous AIS agrees on nothing else,1965.48,4.559
except the heuristic imperatives you'll,1968.0,4.32
have that framework in common and a lot,1970.039,4.14
of work will flow from that so the,1972.32,3.42
Cooperative benefits and this is this,1974.179,3.0
goes between,1975.74,3.179
um above and beyond individuals this,1977.179,3.541
includes corporations governments as,1978.919,3.38
well as militaries,1980.72,4.8
number two is Agi policing and,1982.299,6.401
self-regulation so if you have millions,1985.52,5.639
of agis that all agree on the heuristic,1988.7,3.78
imperatives even if they don't agree on,1991.159,3.421
anything else they will police each,1992.48,4.62
other to say hey we're gonna we're gonna,1994.58,6.24
look out for other agis Rogue agis that,1997.1,4.98
do not abide by the heuristic,2000.82,3.06
imperatives and we will we will,2002.08,4.079
collaborate to shut them down,2003.88,5.639
and then finally in this is uh in many,2006.159,5.341
experiments that I've done the heuristic,2009.519,4.861
imperatives result in self-regulation,2011.5,5.22
um within the AGI for instance one of,2014.38,4.5
the things that we're afraid of is once,2016.72,3.9
AGI has become so powerful that they can,2018.88,4.5
reprogram themselves or spawn copies or,2020.62,4.679
reprogram each other or otherwise get,2023.38,3.659
control of their source code that,2025.299,3.061
they're going to change their,2027.039,3.061
fundamental programming,2028.36,3.96
if you make the assumption that an AGI,2030.1,4.02
can change its fundamental programming,2032.32,3.599
spin up alternative copies of itself,2034.12,3.72
then you completely have lost control,2035.919,4.38
however in my experiments with the,2037.84,5.219
heuristic imperatives agis will shy away,2040.299,5.161
from creating copies of themselves or,2043.059,4.381
even modifying their own core,2045.46,4.439
programming out of fear of violating the,2047.44,4.679
heuristic imperatives and so between,2049.899,4.381
policing each other and self-regulation,2052.119,4.56
the heroist comparatives create a very,2054.28,5.42
powerful self-correcting environment,2056.679,5.881
reputation management number three is,2059.7,6.1
another thing where as I mentioned the,2062.56,6.119
agis will work to infer the objectives,2065.8,5.039
of all other agis whether or not it's,2068.679,3.901
known this goes back to the Byzantine,2070.839,4.201
generals problem where you don't know,2072.58,5.819
how another AGI is programmed they might,2075.04,4.859
have used the heuristic imperatives but,2078.399,2.7
they might have been improperly,2079.899,5.341
implemented or you know they're they uh,2081.099,6.121
you might have Rogue elements that are,2085.24,3.54
created without the heuristic,2087.22,3.24
imperatives or other objectives that are,2088.78,3.48
more destructive and then finally,2090.46,3.3
stakeholder pressure,2092.26,3.599
between the four categories of,2093.76,3.96
stakeholders that I already,2095.859,3.421
um Illustrated which is individuals,2097.72,3.2
corporations governments and militaries,2099.28,4.44
agis are going to be another stakeholder,2100.92,4.54
now whether or not you believe that,2103.72,3.3
they're conscious or sentient or have,2105.46,3.54
rights I don't really think that's,2107.02,3.54
relevant because they will be powerful,2109.0,3.78
entities in and of themselves before too,2110.56,5.039
long and so between those five types of,2112.78,4.92
stakeholders there will be there will be,2115.599,5.821
intra and Inter group pressure to,2117.7,5.7
conform and adhere to the heuristic,2121.42,3.84
imperatives,2123.4,4.26
okay so let's describe,2125.26,4.62
assuming all this works out and assuming,2127.66,3.78
that I'm right and assuming that I'm not,2129.88,4.14
crazy and that the trends continue and,2131.44,4.2
people are going to keep building the,2134.02,4.26
agis that they're working on what,2135.64,5.04
characteristics can we use to describe,2138.28,6.48
this desirable attractor state or Utopia,2140.68,6.06
so one is Universal Health and,2144.76,3.24
well-being,2146.74,3.24
with a few exceptions of people that are,2148.0,4.14
stuck in self-destructive patterns all,2149.98,3.96
humans want health and well and wellness,2152.14,4.8
that's pretty much a given,2153.94,6.72
number two again with with a few,2156.94,6.06
outliers,2160.66,4.86
um people want environmental restoration,2163.0,5.16
and sustainability,2165.52,4.98
um number three individual liberty and,2168.16,4.98
personal autonomy this is an intrinsic,2170.5,5.16
psychological need for all humans,2173.14,3.84
um number four knowledge and,2175.66,2.76
understanding,2176.98,3.0
um curiosity and learning are,2178.42,3.9
universally beneficial which is why,2179.98,5.46
education is one of the uh primary goals,2182.32,7.08
of of uh Nations and and unions of,2185.44,6.419
Nations such as the United Nations uh,2189.4,4.62
European union and so on and then,2191.859,4.921
finally peaceful coexistence uh nobody,2194.02,4.68
wants war and Chaos some people think,2196.78,3.96
it's cool you know watching Lex Friedman,2198.7,3.3
talk to various people they're like oh,2200.74,3.0
yeah there is something attractive about,2202.0,4.44
thinking about catastrophe and cataclysm,2203.74,4.379
we keep making disaster movies for,2206.44,3.84
instance but in terms of how we actually,2208.119,4.201
want to live we all want peaceful,2210.28,3.299
coexistence,2212.32,4.2
and so this desirable attractor State a,2213.579,5.881
shorthand is Utopia,2216.52,5.46
now I know I've painted a very Rosy,2219.46,5.04
picture as well as um you know presented,2221.98,6.42
some challenges so there are still a few,2224.5,6.599
challenges uh remaining that we need to,2228.4,3.84
address,2231.099,3.0
um and so one of those is misalignment,2232.24,4.74
and drift so even with the heuristic,2234.099,4.921
imperatives there might still be drift,2236.98,4.56
or misalignment intentionally or,2239.02,4.44
otherwise it could be that there's flaws,2241.54,5.039
in the implementation the code or maybe,2243.46,4.98
someone breaks them or says hey I'm,2246.579,3.181
going to do an experiment by deleting,2248.44,3.3
one of the heuristic imperatives that,2249.76,3.96
could destabilize the system,2251.74,4.2
second there can be unintended,2253.72,5.399
consequences so one thing that it seems,2255.94,4.98
like it will inevitably happen is that,2259.119,4.561
AGI systems are going to outstrip and,2260.92,4.679
outpace human intellect,2263.68,4.26
if that if that becomes the case and,2265.599,4.02
they might also adopt other languages,2267.94,4.2
right right now most of them communicate,2269.619,5.041
in English because English is the is the,2272.14,4.86
bulk of the training data but you know,2274.66,5.34
for instance what if the agis uh,2277.0,4.68
ultimately communicate with a language,2280.0,3.599
that we cannot comprehend or understand,2281.68,5.3
like binary or vectors or something else,2283.599,6.121
and then we can't even monitor what,2286.98,5.56
they're doing what my hope is that the,2289.72,5.58
agis as part of being trustworthy and,2292.54,5.1
transparent will choose to continue to,2295.3,5.279
communicate exclusively in English,2297.64,6.24
but that we we can't assume that that,2300.579,4.981
will be true,2303.88,3.68
um number three concentration of power,2305.56,6.299
now I did talk about how I believe that,2307.56,6.34
there were there are the the heuristic,2311.859,3.72
imperatives will create an incentive,2313.9,4.32
structure that results in you know,2315.579,5.04
sharing a power transparency so on and,2318.22,4.56
so forth that being said there is still,2320.619,4.441
a tremendous amount of desire to,2322.78,4.799
concentrate power and especially on the,2325.06,4.44
geopolitical stage,2327.579,4.081
um there are nations out there with,2329.5,5.22
mutually exclusive goals and as long as,2331.66,5.04
Nations exist with mutually exclusive,2334.72,4.32
goals or incompatible visions of how the,2336.7,4.32
planet should be there will be,2339.04,4.079
concentrations of power and those,2341.02,4.14
concentrations of power will be pitted,2343.119,3.901
against each other so that is not,2345.16,3.12
something that the heuristic imperatives,2347.02,3.3
intrinsically address but that is a,2348.28,5.22
reality of what exists today which can,2350.32,5.7
destabilize the system so in the long,2353.5,3.839
run,2356.02,3.839
I think part of the ideal State the Nash,2357.339,4.5
equilibrium is that power is not,2359.859,4.321
concentrated anywhere but we need to,2361.839,4.621
overcome several major barriers as a,2364.18,4.2
species before we can achieve that,2366.46,4.86
number four is social resistance public,2368.38,5.699
skepticism mistrust and ignorance is one,2371.32,4.259
of the greatest enemies right now which,2374.079,3.421
is why I am doing this work which is why,2375.579,4.681
I chose YouTube as my primary platform,2377.5,5.579
to disseminate my information,2380.26,6.06
number five malicious use again as long,2383.079,5.76
as there are malicious actors there,2386.32,3.66
might be,2388.839,3.301
um deliberate deployments of AGI that,2389.98,4.56
are harmful which could destabilize the,2392.14,4.14
system,2394.54,3.72
and finally I do need to address this as,2396.28,3.96
well the heuristic imperatives are a,2398.26,4.44
necessary Foundation of this utopic,2400.24,5.879
outcome this this uh beneficient uh,2402.7,5.52
beneficial sorry a tractor state that,2406.119,3.841
we're looking for but they do not,2408.22,4.26
represent a complete solution there are,2409.96,4.08
a few other things that are needed in,2412.48,3.32
order to achieve this outcome one,2414.04,4.559
collaboration and open dialogue so,2415.8,4.779
research is individuals corporations and,2418.599,3.901
governments all need to work together at,2420.579,5.04
a global scale anything short of global,2422.5,6.68
collaboration and cooperation is,2425.619,6.361
could very well result in a negative,2429.18,4.3
outcome and this is one of the things,2431.98,3.3
that um Liv and other people talk about,2433.48,4.32
when talking about Malik is that it is a,2435.28,3.839
uh what do they call it I think a,2437.8,3.18
collaboration failure or a signal,2439.119,3.601
failure I can't remember exactly how,2440.98,3.42
they describe it but essentially,2442.72,4.139
collaboration is the antidote and open,2444.4,4.92
dialogue is the antidote to the,2446.859,5.22
ignorance and other negative signals and,2449.32,4.62
noise that contribute to the Malik,2452.079,3.061
problem,2453.94,3.36
number two is regulatory Frameworks and,2455.14,5.16
oversight again it's not just a matter,2457.3,6.02
of coming together it is that there are,2460.3,5.34
institutional changes that need to,2463.32,4.6
happen such as legislation,2465.64,5.4
um councils and and Summits and other,2467.92,6.72
kinds of meetings and and Investments,2471.04,5.22
that need to happen at an Institutional,2474.64,4.32
level not just communication and and,2476.26,5.099
dialogue but the Frameworks the,2478.96,4.619
oversights those also need to be,2481.359,4.561
implemented at number three education,2483.579,4.801
and awareness as I just mentioned public,2485.92,4.679
awareness and and understanding is,2488.38,5.52
presently insufficient to overcome the,2490.599,4.681
negative attractor states that we're,2493.9,3.3
heading towards and number four,2495.28,4.62
continuous monitoring and Improvement,2497.2,4.32
um this is not a solution that we solve,2499.9,3.959
once it is an ongoing thing just like,2501.52,3.72
how you don't just pass internet,2503.859,3.121
regulations and then you're done you go,2505.24,3.96
home forever you continuously monitor,2506.98,4.859
the changing Dynamic environment so that,2509.2,5.46
you can course correct as you go that is,2511.839,4.381
going to be necessary necessary forever,2514.66,4.5
with AGI it's not going to go away just,2516.22,4.92
like you know the EPA the Environmental,2519.16,3.959
Protection Agency didn't just you know,2521.14,4.56
create a set of guidelines and you know,2523.119,5.361
we're done they pack it up no the EPA,2525.7,5.04
continuously does stress tests and,2528.48,4.48
pressure tests and measurements all over,2530.74,4.14
the nation to make sure that the,2532.96,3.6
policies are effective and then of,2534.88,2.64
course as they gain more information,2536.56,3.48
those policies change we will need the,2537.52,5.28
same kind of vigilance applied to AGI,2540.04,6.62
systems and the AGI ecosystem,2542.8,3.86
so that was a lot thank you for watching,2547.42,5.28
um that's about all I have today uh not,2550.359,4.561
that this was not much but thanks,2552.7,5.159
anyways and um yeah I hope that this,2554.92,4.8
helped and I hope that it gives you a,2557.859,3.301
little bit more confidence in the,2559.72,4.82
direction that we're going thanks,2561.16,3.38