davidshapiro_youtube_transcripts / AGI within 18 months explained with a boatload of papers and projects_transcript.csv
text,start,duration
hey everyone David Shapiro here with an,0.0,5.1
update sorry it's been a while,3.06,3.9
um I am doing much better thank you for,5.1,4.8
asking and thanks for all the kind words,6.96,5.7
um yeah so a couple days ago I posted a,9.9,4.859
video where I said like,12.66,4.619
we're gonna have AGI within 18 months,14.759,5.461
and that caused a stir in some,17.279,4.981
corners of the internet,20.22,3.96
um but I wanted to share like why I,22.26,3.96
believe that because maybe not everyone,24.18,3.72
has seen the same information that I,26.22,3.12
have so first,27.9,5.819
Morgan Stanley research on Nvidia,29.34,6.899
um this was really big on Reddit and,33.719,4.741
basically why we are writing this we,36.239,4.081
have seen several reports that in our,38.46,3.779
view incorrectly characterize the direct,40.32,4.2
opportunity for NVIDIA in particular the,42.239,5.101
revenue from ChatGPT inference,44.52,5.879
we think that GPT-5 is currently being,47.34,4.5
trained on 25000,50.399,5.581
GPUs or 225 million dollars or so of,51.84,6.42
Nvidia hardware and the inference costs,55.98,3.96
are likely much lower than some of the,58.26,3.6
numbers we have seen further,59.94,3.419
reducing inference costs will be,61.86,3.119
critical in resolving the cost of search,63.359,4.921
debate from cloud Titans so basically,64.979,6.301
if chat GPT becomes much much cheaper,68.28,4.92
then it's actually going to be cheaper,71.28,3.659
than search,73.2,3.36
is kind of how I'm interpreting,74.939,4.201
that now this paper goes on to say that,76.56,5.46
like the industry is pivoting so rather,79.14,5.4
than seeing this as a trendy new fad or,82.02,4.739
a shiny new toy they're saying No this,84.54,3.24
actually has serious business,86.759,3.301
implications which people like me have,87.78,5.1
been saying for years but you know the,90.06,4.26
industry is catching up especially when,92.88,3.66
you see like how much revenue Google,94.32,4.38
lost just with the introduction of,96.54,3.96
ChatGPT,98.7,2.52
um,100.5,3.06
I like this quote we're not trying to be,101.22,4.92
curmudgeons on the opportunity,103.56,5.34
so anyways Morgan Stanley Nvidia and,106.14,5.82
I've been in Nvidia's,108.9,5.219
corner for a while saying that like I,111.96,3.479
think they're the underdog they're the,114.119,3.901
unsung hero here so anyways you look at,115.439,4.921
the investment and so this reminds me of,118.02,5.639
the ramp up for solar so 10 to 15 years,120.36,5.88
ago all the debates were like oh solar's,123.659,4.8
not efficient solar isn't helpful it's,126.24,4.92
too expensive blah blah blah and then,128.459,4.86
once you see the business investment,131.16,3.9
going up that's when you know you're at,133.319,4.441
the inflection point so AI is no longer,135.06,4.38
just a bunch of us you know writing,137.76,4.86
papers and tinkering when you see the,139.44,6.0
millions and in this case a quarter of a,142.62,4.32
billion dollars,145.44,4.32
being invested that's when you know that,146.94,4.68
things are changing and so this reminds,149.76,5.64
me of like the 2013 to 2015 range,151.62,5.82
maybe actually even the 2017 range for,155.4,3.36
solar where it's like actually no it,157.44,3.18
makes financial sense,158.76,3.479
um but of course everything with AI is,160.62,4.979
exponentially faster uh so,162.239,6.241
Nvidia is participating they've got the,165.599,4.681
hardware they're building out the big,168.48,4.38
computers so on and so forth the,170.28,4.2
investment is there so the improvement,172.86,3.42
is coming the exponential ramp up is,174.48,3.0
coming now,176.28,4.319
that's great uh one tool let's take a,177.48,6.06
quick break and when I talked,180.599,6.241
about n8n n-eight-n or naden I'm not,183.54,5.22
sure how people pronounce it as well as,186.84,3.899
LangChain people were quick to point,188.76,4.199
out LangFlow which is a graphical,190.739,6.601
interface for LangChain so this,192.959,6.42
fills in a really big gap for LangChain,197.34,4.86
which is okay how do you see it how are,199.379,4.741
things cross-linked so I wanted to share,202.2,4.88
this tool it's at github.com,204.12,7.38
/logspace-ai/langflow so,207.08,5.799
you can just look up LangFlow and,211.5,3.959
you'll find it so this is a good,212.879,5.101
chaining tool a nice graphical interface,215.459,4.081
this is exactly the direction that,217.98,3.24
things are going,219.54,4.5
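As a rough illustration here is what running a flow built in LangFlow might look like from Python. This is a hypothetical sketch: the load_flow_from_json helper follows LangFlow's documented export-to-JSON workflow but the exact API may differ across versions, and my_flow.json is a placeholder name.

```python
# Hypothetical sketch of running a LangFlow flow from Python.
# `load_flow_from_json` follows LangFlow's documented pattern for loading
# a flow exported from the GUI as JSON; the exact API may vary by version.
from langflow import load_flow_from_json

# "my_flow.json" is a placeholder for a flow you exported from the UI
flow = load_flow_from_json("my_flow.json")

# the loaded flow behaves like a LangChain chain: text in, text out
print(flow("What is a cognitive architecture?"))
```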
um great Okay so we've got the business,221.22,4.56
investment we've got people creating,224.04,4.199
open source libraries it's,225.78,4.62
advancing so I wanted to share this,228.239,5.64
paper with you MM-ReAct for,230.4,6.78
multimodal reasoning and action so this,233.879,6.301
basically makes use of the latest GPT,237.18,6.36
where you've got vision and chat,240.18,6.24
and it's kind of,243.54,5.52
exactly what you would expect,246.42,4.019
um but this page does a good job of,249.06,3.179
giving you a bunch of different,250.439,3.961
um examples and I think,252.239,4.861
they're pre-recorded is it playing,254.4,5.04
it looks okay there it goes,257.1,4.74
um so you can check out this paper the,259.44,4.5
full paper is here and there's a live,261.84,4.62
demo up on Hugging Face so you can try,263.94,5.039
different stuff and then talk about it,266.46,5.58
um which is great like the fact that,268.979,5.401
they're able to share this for free just,272.04,5.159
as a demonstration is just a hint as to,274.38,4.44
what's coming,277.199,3.78
um because imagine when this is,278.82,4.08
commoditized you can do it on your phone,280.979,4.021
right your phone's hardware will be,282.9,3.299
powerful enough to run some of these,285.0,3.78
models within a few years certainly if,286.199,4.5
it's uh if it's offloaded to the cloud,288.78,4.56
it's powerful enough to do it now,290.699,5.821
um and then so,293.34,5.46
when you stitch together the,296.52,4.5
rapidly decreasing cost of inference,298.8,3.899
these things are basically going to be,301.02,4.14
free to use pretty soon when you look at,302.699,4.681
the fact that an open source framework,305.16,5.759
like LangFlow and so on can,307.38,5.22
allow pretty much anyone to create,310.919,3.661
cognitive workflows,312.6,4.56
and all these things it's like okay yeah,314.58,5.16
like we're gonna have really powerful,317.16,3.84
machines soon,319.74,3.179
and so someone asked for clarification,321.0,4.139
when I said okay well what do you mean,322.919,5.041
when you say AGI within 18 months,325.139,4.261
because nobody can agree on the,327.96,3.72
definition and if you watched the Sam,329.4,5.16
Altman Lex Fridman interview he,331.68,4.739
refers to AGI,334.56,3.479
several times but the definition seems,336.419,3.481
to change because early in the interview,338.039,3.781
he talks about like oh you know you put,339.9,4.139
someone in front of GPT-4 or ChatGPT-4,341.82,3.9
and what's the first thing that they do,344.039,4.321
and these are his words when,345.72,4.979
they interact with an AGI is they try,348.36,4.08
and break it or tease it or whatever and,350.699,4.44
then later he says oh well GPT-5 that's,352.44,4.56
not even going to be AGI so he keeps,355.139,3.481
like equivocating and bouncing back and,357.0,3.419
forth,358.62,4.38
I think that part of what's going on,360.419,5.041
here is there's no good definition and,363.0,4.259
because later in the conversation they,365.46,3.6
were talking about things that a chat,367.259,4.801
model can do it's not autonomous right,369.06,4.5
um but,372.06,4.8
I'm glad you asked Reflexion came out,373.56,5.579
an autonomous agent with dynamic memory,376.86,4.559
and self-reflection,379.139,4.021
um so between,381.419,5.581
cognitive workflows and autonomy and the,383.16,5.46
investment coming into these,387.0,3.0
models,388.62,4.199
we are far closer to fully autonomous,390.0,5.6
agents than I think many people,392.819,5.341
recognize so the Reflexion stuff I'm,395.6,3.58
not going to do a full video on,398.16,3.0
Reflexion there's other ones,399.18,4.26
out there but basically this outperforms,401.16,5.4
humans in a few tasks and it forms a,403.44,5.34
very very basic kind of cognitive,406.56,3.84
architecture loop,408.78,3.78
so query action environment reward,410.4,4.98
reflect and then repeat so you just,412.56,4.68
continuously iterate on something in a,415.38,4.86
loop and there you go,417.24,4.86
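To make that loop concrete here is a minimal sketch of a Reflexion-style iteration. This is not the paper's code: llm and env are placeholder callables standing in for a language model and a task environment.

```python
# Minimal sketch of a Reflexion-style loop (query, action, environment,
# reward, reflect, repeat). Not the paper's code: `llm` and `env` are
# placeholder assumptions for a model call and a task environment.

def reflexion_loop(llm, env, query, max_iters=5):
    reflections = []  # accumulated self-reflections act as dynamic memory
    for _ in range(max_iters):
        # choose the next action, conditioned on past reflections
        action = llm(
            f"Task: {query}\nReflections so far: {reflections}\nNext action:"
        )
        observation, reward, done = env.step(action)
        if done:
            return observation  # task solved; exit the loop
        # reflect on the outcome and carry that lesson into the next pass
        reflections.append(
            llm(
                f"Action: {action}\nObservation: {observation}\n"
                f"Reward: {reward}\nWhat should be done differently?"
            )
        )
    return None  # gave up after max_iters attempts
```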
and also for people who keep asking me,420.24,2.64
what I think about,422.1,2.76
what's his name Ben Goertzel I'm,422.88,3.24
not sure if I'm saying his name right,424.86,3.72
but I read his seminal paper a couple,426.12,4.139
years ago on the general theory of general,428.58,4.32
intelligence and he never mentioned,430.259,5.041
iteration or loops at least not to the,432.9,3.66
degree that you need to when you're,435.3,3.959
talking about actual intelligence so I,436.56,4.859
personally don't think that he's done,439.259,5.28
anything particularly relevant today I'm,441.419,4.5
not going to comment on his older work,444.539,3.0
because obviously like he's made a name,445.919,3.661
for himself so on and so forth but I,447.539,3.741
don't think that Ben has done anything,449.58,3.839
really pertinent to cognitive,451.28,3.94
architecture which is the direction that,453.419,3.84
things are going,455.22,5.16
um but yeah so when MIT is doing,457.259,5.581
research on cognitive architecture and,460.38,6.18
autonomous designs when Morgan Stanley,462.84,6.62
and Nvidia are working on investing,466.56,4.919
literally hundreds of millions of,469.46,4.6
dollars to drive down inference cost and,471.479,5.701
when open source uh libraries are,474.06,4.44
creating,477.18,2.639
um the rudiments of cognitive,478.5,4.199
architectures we are ramping up fast and,479.819,5.521
so someone asked what I meant again kind,482.699,3.961
of getting back to that what did I mean,485.34,4.44
by AGI within 18 months I said in 18,486.66,4.379
months,489.78,4.259
any possible definition of AGI that you,491.039,5.341
have will be satisfied,494.039,3.78
um so it's like I don't care what your,496.38,3.96
definition of AGI is unless like there's,497.819,3.841
still some people out there that like,500.34,3.12
you ask them and it's like oh well once,501.66,3.9
AGI hits like the skies will darken and,503.46,3.72
nuclear weapons will rain down and I'm,505.56,4.74
like that's not AGI that's Ultron that's,507.18,5.7
different that's that's a fantasy,510.3,3.9
um that's probably not going to happen,512.88,3.24
it could if Skynet's going to happen it,514.2,4.079
will happen within 18 months,516.12,3.9
um but I don't think it's going to,518.279,4.2
happen Okay so that's section one of the,520.02,4.5
video talking about the news and,522.479,4.501
everything out there so now let me pivot,524.52,4.319
and talk about the work that I've been,526.98,3.0
doing,528.839,3.421
um so I've been making extensive use of,529.98,6.18
ChatGPT-4 to accelerate my own research,532.26,5.639
um I've been working on a few things,536.16,3.78
many of you are going to be familiar,537.899,3.481
with my work on the heuristic,539.94,3.72
imperatives which is how do you create a,541.38,4.74
fully autonomous machine that is safe,543.66,5.94
and stable ideally for all of eternity,546.12,6.36
um so this is,549.6,4.62
probably one of my most important pieces,552.48,3.24
of work and I've put it into all of my,554.22,4.38
books and a lot of other stuff the TLDR,555.72,5.48
of heuristic imperatives is it's,558.6,6.6
similar to Asimov's three laws of,561.2,5.56
robotics but it is much much more,565.2,3.9
broadly generalized and it is,566.76,4.199
also not androcentric or,569.1,3.54
anthropocentric,570.959,4.5
and so basically the three rules that if,572.64,5.16
you embed them into your autonomous,575.459,4.861
AI systems reduce suffering in the,577.8,4.02
universe increase prosperity in the,580.32,2.82
universe and increase understanding in,581.82,3.42
the universe this creates a very,583.14,4.139
thoughtful machine and it serves as a,585.24,3.36
really good,587.279,3.201
um reinforcement learning mechanism,588.6,5.04
self-evaluation mechanism that results,590.48,6.52
in a very thoughtful machine so that,593.64,6.66
information is all available out here,597.0,6.24
on my GitHub daveshap under,600.3,5.159
heuristic imperatives I've got it published,603.24,4.68
as a Word doc and a PDF,605.459,4.5
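As an illustration of how the imperatives can serve as a self-evaluation mechanism here is a hypothetical sketch. The prompt wording and the llm callable are assumptions for the example and not the code from the repo.

```python
# Hypothetical sketch: scoring a proposed action against the three
# heuristic imperatives with an LLM as judge. The prompt and the `llm`
# callable are illustrative assumptions, not the repo's actual code.

IMPERATIVES = [
    "reduce suffering in the universe",
    "increase prosperity in the universe",
    "increase understanding in the universe",
]

def evaluate_action(llm, action: str) -> float:
    """Return an average 1-10 alignment score for a candidate action."""
    scores = []
    for imperative in IMPERATIVES:
        reply = llm(
            f"On a scale of 1 to 10 how well does the action '{action}' "
            f"serve the imperative to {imperative}? Reply with a number only."
        )
        scores.append(float(reply.strip()))
    return sum(scores) / len(scores)
```

An autonomous agent could use a score like this as a reinforcement or self-evaluation signal when choosing between candidate actions.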
so I started adopting a more scientific,607.92,3.419
approach,609.959,3.06
um because well there's a reason that,611.339,4.56
the scientific paper format works so if,613.019,4.981
you want to come out here and read it,615.899,3.901
um it's out there it's totally free of,618.0,2.82
course,619.8,2.4
um oh actually that reminds me I need to,620.82,3.78
put a way to cite my work because you,622.2,5.819
can cite GitHub repos but basically this,624.6,6.12
provides quite a bit and one,628.019,5.101
thing to point out is that this paper,630.72,4.38
was written almost entirely word for,633.12,5.52
word by ChatGPT-4 meaning that all of,635.1,6.54
the reasoning that it does was performed,638.64,7.8
by ChatGPT-4 and at the very end,641.64,7.199
um I actually had it reflect on its own,646.44,4.88
performance,648.839,2.481
um it looks like it's not going to load,651.48,3.419
that many more pages oh there we go,652.8,5.76
examples so anyways when you read,654.899,5.641
this and you keep in mind that the,658.56,4.56
nuance of,660.54,4.919
this was,663.12,5.94
within the capacity of ChatGPT-4,665.459,5.281
you will see that these models are,669.06,4.399
already capable of very very nuanced,670.74,5.7
empathetic and moral reasoning and this,673.459,3.94
is one thing that a lot of people,676.44,2.519
complain about they're like oh well it,677.399,3.961
doesn't truly understand anything I,678.959,3.661
always say that humans don't truly,681.36,3.06
understand anything so that's a,682.62,4.38
frivolous argument but,684.42,4.08
um that leads to another area of,687.0,3.66
research which I'll get into in a minute,688.5,4.74
uh but basically keep in mind how,690.66,4.98
nuanced this paper is and keep in mind,693.24,4.74
that ChatGPT wrote pretty much the,695.64,4.02
entire thing and I've also got the,697.98,3.359
transcript of the conversation at the,699.66,3.359
end so if you want to,701.339,3.541
read the whole transcript please feel,703.019,3.241
free to read the whole transcript and,704.88,2.94
you can see,706.26,3.48
um where like we worked through the,707.82,3.66
whole paper,709.74,4.92
um yeah so that's it so on the topic of,711.48,5.46
uh does the machine truly understand,714.66,3.799
anything,716.94,6.6
that resulted in this transcript which I,718.459,7.781
have yet to format into a full,723.54,6.479
scientific paper but basically the,726.24,6.36
TLDR here is what I call the,730.019,5.521
epistemic pragmatic orthogonality which,732.6,5.58
is that the epistemic truth of whether,735.54,4.32
or not a machine truly understands,738.18,5.159
anything is orthogonal or uncorrelated,739.86,6.84
with how useful it is or objectively,743.339,6.141
um correct it is right so if you look,746.7,5.759
basically it doesn't matter if the,749.48,4.66
machine truly understands anything,752.459,4.681
because again that's not really germane,754.14,6.18
to its function as a machine and so this,757.14,5.879
is a fancy term but it basically,760.32,5.1
says okay and there was actually,763.019,3.961
a great Reddit post where it's like can,765.42,3.3
we stop arguing over whether or not it's,766.98,3.72
sentient or conscious or understands,768.72,4.5
anything that doesn't matter,770.7,5.04
um what matters is its physical,773.22,5.7
objective measurable impact and,775.74,5.36
whether it is objectively or measurably,778.92,4.26
correct or useful,781.1,3.94
so I call that the epistemic pragmatic,783.18,4.08
orthogonality principle of artificial,785.04,4.799
intelligence I've got it summarized here,787.26,4.74
so you can just read this is the,789.839,3.721
executive summary,792.0,3.48
um that I actually used ChatGPT to write,793.56,3.54
so again a lot of the work that I'm,795.48,5.28
doing is anchored by ChatGPT and the,797.1,5.7
fact that ChatGPT was able to have a,800.76,5.34
very nuanced conversation about its own,802.8,4.56
understanding,806.1,3.12
kind of tells you how smart these,807.36,3.479
machines are,809.22,4.679
um yep so that is that paper now moving,810.839,4.981
on back to uh some of the cognitive,813.899,3.901
architecture stuff,815.82,3.06
um one thing that I'm working on is,817.8,3.599
called REMO the rolling episodic,818.88,5.22
memory organizer for autonomous AI,821.399,4.981
systems I initially called this HMCS,824.1,3.72
which is hierarchical memory,826.38,3.54
consolidation system but that's a,827.82,5.699
mouthful and it doesn't abide by the,827.82,5.699
current trend where you use an,829.92,5.64
acronym that's easy to say right so REMO,833.519,4.56
rolling episodic memory organizer much,838.079,3.921
easier to say much easier to remember,840.0,6.72
basically what this does is it's also,842.0,7.18
not done so I need to add a caveat there,846.72,4.32
I'm working through it here with,849.18,4.32
ChatGPT-4 where we're working on defining the,851.04,4.739
problem writing the code so on and so,853.5,4.62
forth but basically what this does is,855.779,4.261
rather than just using semantic search,858.12,5.399
because uh a lot of folks have realized,860.04,5.88
that yes semantic search is really great,863.519,4.801
because it allows you to search based on,865.92,3.96
semantic similarity rather than just,868.32,4.92
keywords super powerful super fast uh,869.88,5.639
using stuff like Pinecone still not good,873.24,5.52
enough because it is not organized in,875.519,5.88
the same way that a human memory is,878.76,5.939
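For reference here is plain semantic search in miniature, the pattern that tools like Pinecone implement at scale. The embed callable is a placeholder for any embedding model; this is an illustrative sketch, not Pinecone's API.

```python
# Illustrative sketch of plain semantic search over stored memories.
# `embed` is a placeholder for any embedding model; this is the pattern
# tools like Pinecone implement at scale, not Pinecone's actual API.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def semantic_search(query, memories, embed, top_k=5):
    """Rank memories by semantic similarity to the query, not keywords."""
    qv = embed(query)
    ranked = sorted(memories, key=lambda m: cosine(qv, embed(m)), reverse=True)
    return ranked[:top_k]
```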
so REMO the entire point of REMO,881.399,6.601
is to do two things,884.699,5.221
um the two primary goals are to maintain,888.0,5.279
salience and coherence so salient,889.92,5.64
memories means that what you're,893.279,4.201
looking at is actually germane actually,895.56,3.719
relevant to the conversation that you're,897.48,4.26
having which can be more difficult if,899.279,4.56
you just use semantic search the other,901.74,5.339
thing is coherence which is keeping the,903.839,5.581
context of those memories,907.079,5.041
um basically in a coherent narrative so,909.42,5.219
if rather than just focusing on semantic,912.12,5.219
search the two terms that I'm,914.639,4.861
introducing are salience and coherence,917.339,5.281
and of course this is rooted in temporal,919.5,5.82
binding so human memories are temporal,922.62,5.339
and associative so those four concepts,925.32,5.94
salience and coherence are achieved with,927.959,5.721
temporal and associative or semantic,931.26,6.12
consolidation and so what I mean by uh,933.68,6.099
temporal consolidation is you take,937.38,5.1
clusters of memories that are temporally,939.779,5.401
bounded or temporally nearby and you,942.48,5.28
summarize those so that gives you,945.18,4.68
temporal consolidation which,947.76,3.96
allows you to compress,949.86,4.44
AI memories you know on a,951.72,6.0
factor of five to one 10 to 1 20 to 1,954.3,5.76
depending on how concisely you summarize,957.72,4.739
them so that gives you a lot of,960.06,6.18
consolidation then you use semantic,962.459,6.661
modeling to create a semantic web or a,966.24,5.219
cluster from the semantic,969.12,4.32
embeddings of those summaries,971.459,6.961
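Here is a rough sketch of those two passes. It is an assumption-laden illustration rather than the actual REMO code: summarize and embed stand in for an LLM summarizer and an embedding model, and the window size and similarity threshold are made-up parameters.

```python
# Illustrative sketch of REMO-style consolidation, not the actual code.
# `summarize` and `embed` are placeholders for an LLM summarizer and an
# embedding model; window size and threshold are made-up parameters.
import math
from itertools import groupby

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def temporal_consolidation(messages, summarize, window_secs=3600):
    """Summarize temporally bounded clusters of raw chat logs, compressing
    memories on the order of 5:1 to 20:1 depending on summary length."""
    keyfn = lambda m: int(m["timestamp"] // window_secs)
    return [
        {"window": window, "text": summarize([m["text"] for m in group])}
        for window, group in groupby(sorted(messages, key=keyfn), key=keyfn)
    ]

def semantic_consolidation(summaries, embed, threshold=0.8):
    """Group summaries into topics by embedding similarity regardless of
    when they happened (temporally invariant recall)."""
    topics = []
    for s in summaries:
        v = embed(s["text"])
        for topic in topics:
            if cosine(v, topic["centroid"]) > threshold:
                topic["members"].append(s)
                break
        else:
            topics.append({"centroid": v, "members": [s]})
    return topics
```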
so it's a layered process actually,973.44,4.98
here I think I can just show you here,978.72,4.619
um,982.019,2.94
wait no I've got the paper here let me,983.339,4.081
show you the REMO paper,984.959,4.141
um so this is a work in progress it'll,987.42,3.12
be published soon,989.1,2.7
um but let me show you the diagrams,990.54,2.7
because this will just make much,991.8,3.539
more sense oh and ChatGPT can make,993.24,4.26
diagrams too you just ask it to output a,995.339,5.701
mermaid diagram definition and it'll do,997.5,6.779
it so here's the TLDR the very,1001.04,5.46
simple version of the REMO framework,1004.279,4.321
it's got three layers so there's,1006.5,4.259
the raw log layer which is just the chat,1008.6,4.56
logs back and forth the temporal,1010.759,4.5
consolidation layer which as I just,1013.16,5.0
mentioned allows you to compress,1015.259,6.841
memories based on temporal grouping,1018.16,5.38
and then finally the semantic,1022.1,4.739
consolidation layer which allows you to,1023.54,5.639
create and extract topics based on,1026.839,4.86
semantic similarity so by having,1029.179,5.16
these two layers that have,1031.699,4.86
different kinds of consolidation you end,1034.339,4.441
up with what I call temporally invariant,1036.559,6.0
recall so the topics that we,1038.78,5.279
extract,1042.559,5.4
um are going to include all the time,1044.059,6.561
from beginning to end that is relevant,1047.959,5.34
while also having benefited from,1050.62,5.32
temporal consolidation I'm going to come,1053.299,4.26
up with some better diagrams to,1055.94,5.4
demonstrate this but basically it's like,1057.559,5.941
actually I can't think of a good way,1061.34,4.199
to describe it,1063.5,4.32
um but anyway so this paper is coming,1065.539,4.201
um and I'm actively experimenting,1067.82,3.9
with this on a newer version of Raven,1069.74,4.439
that uses a lot more implied cognition,1071.72,5.04
so I talked about implied cognition in a,1074.179,4.321
previous episode but basically implied,1076.76,6.299
cognition is when using ChatGPT-4 I,1078.5,6.36
realized that it is able to think through,1083.059,4.081
stuff without you having to design a,1084.86,3.42
more sophisticated cognitive,1087.14,3.12
architecture so the cognitive,1088.28,4.5
architecture with GPT-4 as the cognitive,1090.26,5.1
engine actually becomes much simpler and,1092.78,4.2
you only have to focus I don't want,1095.36,3.96
to say only but the focus shifts then to,1096.98,4.319
memory because once you have the correct,1099.32,4.14
memories the model becomes much more,1101.299,3.181
intelligent,1103.46,2.839
so that's up here under the REMO framework,1104.48,5.04
I'm working on a conversation with Raven,1106.299,5.441
to demonstrate this,1109.52,4.38
um and that's that the paper will be,1111.74,4.5
coming too so this is one big,1113.9,4.019
important piece of work the other most,1116.24,2.939
important piece of work that I'm working,1117.919,3.721
on is the ATOM framework and this,1119.179,4.801
paper is already done,1121.64,4.5
um but,1123.98,4.62
the ATOM framework let me just load it here,1126.14,4.98
there we go so autonomous task,1128.6,4.319
orchestration manager so this is another,1131.12,4.2
kind of long-term memory for autonomous,1132.919,5.161
AI systems that's basically like the,1135.32,4.739
TLDR is,1138.08,4.979
um it's like Jira or Trello but for,1140.059,5.761
machines with an API,1143.059,6.301
um and so in this case it's,1145.82,6.239
inspired by a lot of things one agile,1149.36,5.939
two On Task by David Badre,1152.059,5.761
um Neuroscience for Dummies Jira,1155.299,4.921
Trello a whole bunch of other stuff,1157.82,5.58
um but basically we talk about cognitive,1160.22,4.56
control so I'm introducing a lot of,1163.4,3.48
neuroscience terms to the AI community,1164.78,4.44
so cognitive control has to do with task,1166.88,4.32
selection task switching task,1169.22,4.8
decomposition goal tracking goal states,1171.2,4.56
those sorts of things,1174.02,3.659
um and then we talk about,1175.76,3.299
um you know some of the inspiration,1177.679,3.841
agile Jira Trello,1179.059,4.681
um and then so it's like okay so what,1181.52,4.019
are the things that we need to,1183.74,4.02
include in order for an,1185.539,5.701
AI system to be fully autonomous and,1187.76,6.299
track tasks over time so you need tools,1191.24,4.5
and tool definitions you need resource,1194.059,3.12
management and you need an agent model,1195.74,3.66
all these are described,1197.179,5.221
later on in greater depth,1199.4,6.42
um then actually in my conversation,1202.4,5.94
with ChatGPT one of the things that,1205.82,4.08
it said is like okay well how do you,1208.34,2.94
prioritize stuff and I was like I'm glad,1209.9,3.18
you asked um and so I shared my work,1211.28,3.42
with the heuristic imperatives and,1213.08,3.78
ChatGPT agreed like oh yeah this is a really,1214.7,4.5
great framework for prioritizing tasks,1216.86,4.679
and measuring success okay great,1219.2,4.02
let's use that,1221.539,4.681
um I think let's see is the,1223.22,4.44
transcript posted I don't know if I,1226.22,3.18
posted the transcript I didn't I'll post,1227.66,3.84
the full transcript of,1229.4,4.2
making the ATOM framework,1231.5,3.96
um in the repo,1233.6,4.319
um so then we get into like okay so now,1235.46,4.44
that you have all the background what do,1237.919,4.201
we talk about so it's all about tasks,1239.9,4.08
and the data that goes into the task so,1242.12,3.059
first you need to figure out how to,1243.98,3.3
represent a task so there's basic stuff,1245.179,4.561
like task ID description type goal state,1247.28,5.1
priority dependencies resource time,1249.74,4.799
estimates task status assigned agents,1252.38,5.64
progress and then the one that is,1254.539,6.421
um new is task impetus so this is,1258.02,4.74
something that you might not,1260.96,3.48
think of if you think about you know,1262.76,3.6
your Jira board or your kanban,1264.44,5.76
board or Trello board is the why so the,1266.36,6.6
why is implicit in our tasks why am I,1270.2,3.96
trying to do this,1272.96,3.42
but when we added this,1274.16,4.2
um ChatGPT got really excited and it's,1276.38,3.9
like oh yeah it's actually really,1278.36,4.26
important to record why any autonomous,1280.28,4.38
entity is doing a task for a number of,1282.62,5.64
reasons one to track priorities or,1284.66,5.94
the impetus might be superseded,1288.26,4.38
later on any number of things but also,1290.6,3.72
you need to justify the use of those,1292.64,4.26
resources in that time so this all goes,1294.32,4.44
into the representation of a task which,1296.9,4.44
you can do in JSON YAML flat files,1298.76,4.44
vector databases whatever I don't care,1301.34,3.36
like you can figure out how you want to,1303.2,3.359
represent it I'm probably just going to,1304.7,3.479
do these in text files honestly because,1306.559,3.841
that's the easiest thing for an LLM to,1308.179,3.721
read,1310.4,3.659
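To make the representation concrete here is a hypothetical sketch of a task record using the fields listed above. The field names and example values are illustrative assumptions rather than the framework's canonical schema.

```python
# Hypothetical sketch of an ATOM-style task record using the fields named
# above; names and example values are illustrative assumptions, not the
# framework's canonical schema.
from dataclasses import dataclass, field

@dataclass
class Task:
    task_id: str
    description: str
    task_type: str
    goal_state: str
    priority: int
    impetus: str                       # the "why" behind the task
    dependencies: list = field(default_factory=list)
    resource_estimate: str = ""
    time_estimate: str = ""
    status: str = "backlog"
    assigned_agents: list = field(default_factory=list)
    progress: str = ""

example = Task(
    task_id="task-001",
    description="Summarize new papers on cognitive architecture",
    task_type="research",
    goal_state="A one-page digest saved to the repo",
    priority=2,
    impetus="Increase understanding in the universe",
)
```

A record like this serializes naturally to JSON or YAML or even a flat text file, which matches the point above that an LLM just needs to be able to read it.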
um and then so talking about the task,1311.9,3.779
representation then we move on to the,1314.059,3.841
task life cycle task creation,1315.679,4.581
decomposition prioritization execution,1317.9,4.56
monitoring and updating and then finally,1320.26,4.0
completing the task,1322.46,3.3
um and then you archive it and,1324.26,2.88
you save it for later so that you can,1325.76,3.6
refer back to it again this is still,1327.14,4.26
primarily a long-term memory system for,1329.36,4.38
autonomous AI systems,1331.4,4.5
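A minimal sketch of that life cycle as a linear state progression follows; the stage names track the list above and the code itself is an illustrative assumption.

```python
# Illustrative sketch of the ATOM task life cycle as a simple state
# progression; stage names follow the video, the code is an assumption.
LIFECYCLE = [
    "created",      # task creation
    "decomposed",   # broken into subtasks
    "prioritized",  # ranked, e.g. against the heuristic imperatives
    "executing",    # an agent is actively working the task
    "monitoring",   # progress tracked and updated
    "completed",    # goal state reached
    "archived",     # saved for later recall as long-term memory
]

def advance(status: str) -> str:
    """Move a task to the next stage, stopping at 'archived'."""
    i = LIFECYCLE.index(status)
    return LIFECYCLE[min(i + 1, len(LIFECYCLE) - 1)]
```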
um some of the folks that I work with on,1333.74,3.66
Discord,1335.9,3.779
um and by work with I mean just like I'm,1337.4,4.62
in you know the AI communities with them,1339.679,3.36
um they all think that the ATOM,1342.02,3.42
framework is pretty cool,1343.039,4.38
um so then we talk about task corpus,1345.44,4.26
management which is like okay looking at,1347.419,3.961
an individual task is fine but how do,1349.7,3.359
you look at your entire body of tasks,1351.38,4.02
because an autonomous AI might have,1353.059,4.081
five tasks it might have five thousand,1355.4,4.08
tasks and then you need some,1357.14,5.399
processes like okay if we're going,1359.48,4.92
through these tasks how do we manage a,1362.539,3.841
huge volume of tasks and so some ideas,1364.4,4.259
about how to do that are here,1366.38,4.14
um and then finally one of the last,1368.659,3.241
sections is some implementation,1370.52,4.08
guidelines which is just okay these are,1371.9,4.2
probably some things that you,1374.6,4.02
want to think about when you deploy,1376.1,4.079
your implementation of the ATOM,1378.62,3.24
framework,1380.179,3.901
um yeah so I think that's about it,1381.86,3.72
obviously I'm always working on a few,1384.08,3.719
different things but the ATOM framework,1385.58,4.62
and the REMO framework are the two,1387.799,4.321
biggest things that I'm working on in,1390.2,2.94
terms of,1392.12,4.02
autonomous AI and so yeah,1393.14,5.519
all this stuff is coming fast uh I think,1396.14,5.039
that's about it so thanks for watching,1398.659,4.5
um like and subscribe and support me on,1401.179,4.081
Patreon if you'd like,1403.159,3.961
um for anyone who does jump in on,1405.26,3.72
Patreon I'm happy to answer some,1407.12,3.96
questions for you even jump on video,1408.98,3.84
calls if you jump in at the high enough,1411.08,3.0
tier,1412.82,3.42
um I help all kinds of people I do have,1414.08,4.62
a few NDAs that I have to honor,1416.24,3.9
um but those are,1418.7,3.0
pretty narrow and some of them are also,1420.14,3.18
expiring,1421.7,4.32
um so I've had people ask you know for,1423.32,5.099
help just with writing prompts for,1426.02,5.58
ChatGPT I've had people ask,1428.419,4.62
um simple things like how did you learn,1431.6,3.3
what you learned,1433.039,4.5
um all kinds of stuff uh but yeah so,1434.9,4.38
that's that thanks for watching and,1437.539,4.281
cheers everybody,1439.28,2.54