davidshapiro_youtube_transcripts / GPT5 Rumors and Predictions Its about to get real silly_transcript.csv
text,start,duration
good morning everybody David Shapiro,0.84,5.999
here with another spicy video things are,3.24,6.12
getting real interesting real fast,6.839,6.18
so uh my GPT-4 predictions video,9.36,5.939
um was pretty popular so let's,13.019,4.68
do the same thing but first let's do a,15.299,5.701
quick recap for GPT-4 before,17.699,7.381
jumping into GPT-5.,21.0,6.599
so perhaps the spiciest thing that,25.08,5.039
happened after the release of ChatGPT-4,27.599,5.061
which is not the foundation model of,30.119,6.96
current GPT-4 but Microsoft Research this,32.66,6.34
is Microsoft this is not some Podunk,37.079,5.341
shop this is Microsoft says that GPT-4,39.0,7.559
represents the first sparks of AGI and,42.42,7.5
that it performs strikingly close to,46.559,5.221
human-level performance on many many,49.92,3.0
tasks,51.78,4.02
uh so given the breadth and depth of its,52.92,4.44
capabilities it could be reasonably,55.8,5.52
viewed as an early yet incomplete AGI,57.36,7.14
so that was the the kind of the thing,61.32,4.92
that the shot heard around the world so,64.5,3.299
to speak,66.24,3.419
um there's been a little bit of other news,67.799,5.301
or numbers and features sorry about GPT-4,69.659,7.5
so the base model of,73.1,7.18
ChatGPT-4 has an 8 000 token window and,77.159,4.681
that alone has been a game changer,80.28,3.18
doubling from four thousand to eight,81.84,3.919
thousand tokens unlocks a lot of,83.46,4.68
capabilities they already have an,85.759,4.54
officially announced 32 000 token window,88.14,5.04
so that is eight times larger than,90.299,4.261
GPT-3,93.18,5.34
and a 32 000 token window is gonna,94.56,6.3
be an even larger game changer there's,98.52,3.66
going to be so many things that you can,100.86,2.939
unlock with that,102.18,4.68
now in terms of parameter count we don't,103.799,5.761
really know but if I had to give you a,106.86,4.98
best guess looking at the scale of speed,109.56,4.8
because you look at Curie versus DaVinci,111.84,5.279
which are you know GPT-3 models,114.36,4.5
um the difference was about a,117.119,4.68
10x difference right and so then you,118.86,5.52
look at the relative speed of ChatGPT-4,121.799,5.1
versus ChatGPT-3 and it's like okay,124.38,5.579
maybe it's about 10 times again so if I,126.899,6.901
had to guess maybe ChatGPT-4 is about,129.959,7.021
you know in the 1 trillion parameter,133.8,6.6
range who knows but it is definitely,136.98,5.58
slower and the fact that it is slower,140.4,3.479
indicates that it's doing more,142.56,4.14
processing which means more parameters,143.879,6.0
or more layers a deeper larger model now,146.7,6.119
one other thing about GPT-4 that most of,149.879,5.22
us haven't seen yet but they did,152.819,5.041
demonstrate it is that it is multimodal,155.099,5.28
it's not just text anymore it supports,157.86,5.879
images another thing about ChatGPT-4 is,160.379,5.101
it passed the bar exam in the 90th,163.739,3.601
percentile it has passed some other,165.48,3.839
tests in the 99th percentile so it's,167.34,3.42
pretty smart,169.319,3.601
um on some benchmarks it outperforms,170.76,5.759
most humans already and then from using,172.92,6.539
ChatGPT-4 it is qualitatively better at,176.519,5.281
pretty much everything it is a step,179.459,4.381
improvement above everything that,181.8,6.9
GPT-3 and ChatGPT-3 can do and,183.84,7.2
then of course MIT released a study,188.7,4.98
showing that even just ChatGPT 3.5,191.04,5.339
increased white-collar productivity by,193.68,6.66
40 percent and GPT-4 is going to do the,196.379,5.94
same thing again so these models are,200.34,3.66
coming they're already having a huge,202.319,4.56
impact and people are just beginning to,204.0,4.92
learn how to use them so that's,206.879,6.36
GPT-3 3.5 and 4 just as a recap,208.92,8.94
before we jump into GPT-5 now I believe,213.239,6.42
this is the last slide of kind of,217.86,3.959
recapping the way that things are right,219.659,5.761
now so as many of you have heard there,221.819,5.7
has been an open letter signed by a,225.42,3.48
whole bunch of people that's,227.519,3.72
being dubbed the great pause people are,228.9,4.619
calling for the great pause which is,231.239,4.86
a six-month moratorium on building,233.519,6.321
anything more powerful than GPT-4,236.099,6.661
uh the reasons are safety ethics,239.84,5.5
regulations so on and so forth there's,242.76,6.24
also been a call for a public,245.34,4.92
version,249.0,4.08
um basically the CERN of AI which,250.26,5.699
when you look at how much money goes,253.08,4.86
into CERN it's billions and billions of,255.959,3.601
dollars a year,257.94,2.34
um,259.56,4.02
funding AI research at just one percent,260.28,5.639
of that is a drop in the bucket and,263.58,4.559
could probably produce public versions,265.919,6.661
you know commonly owned,268.139,6.961
or fully open source versions,272.58,3.6
um just kind of like how the internet,275.1,3.42
was developed,276.18,5.16
um you know actually at CERN or at least,278.52,4.44
the the World Wide Web,281.34,4.26
um HTML and so on,282.96,5.04
um there have been no major regulatory,285.6,3.78
movements yet which is really,288.0,2.58
interesting,289.38,3.96
so no governments as far as I know,290.58,5.76
even in the even in Europe have gone so,293.34,4.74
far as to say hey let's let's put the,296.34,4.2
kibosh on this for a little while which,298.08,5.22
usually the European Union and,300.54,5.7
European nations are a little bit more,303.3,4.5
kind of ahead of the curve because,306.24,3.66
America is very reactionary I can't,307.8,4.08
remember the name of this paradigm,309.9,5.4
but American politics and legislation,311.88,5.879
is very deliberately only going to react,315.3,4.92
to things once they happen rather than,317.759,5.22
preemptively legislate whereas Germany,320.22,5.28
and the EU and France and other places,322.979,4.981
are much more likely to proactively,325.5,3.9
legislate things just on the,327.96,4.079
anticipation of a problem but even,329.4,4.859
Europe as far as I know has not put any,332.039,4.801
restrictions on language models and deep,334.259,5.341
learning so that's very interesting,336.84,5.579
according to some rumors this,339.6,5.24
ricocheted around Reddit a while ago,342.419,5.941
GPT-5 is already being trained on 25,344.84,5.74
000 Nvidia GPUs,348.36,4.98
um the estimate was over 200 million,350.58,5.16
dollars worth of Nvidia hardware is,353.34,4.919
being used to train GPT-5 again that's a,355.74,4.019
rumor,358.259,3.601
um another big piece of news was Sam,359.759,4.081
Altman was recently on the Lex Fridman,361.86,5.64
podcast and what he said and this,363.84,5.639
to me was from a technical perspective,367.5,4.68
the most interesting thing was that,369.479,4.681
GPT-4 did not come about from any,372.18,4.44
paradigm shifts it was not a new,374.16,4.319
architecture or anything but that it,376.62,4.62
came about from hundreds and hundreds of,378.479,4.861
small incremental improvements that had,381.24,4.679
a multiplicative effect across the whole,383.34,6.06
thing which resulted in you know new,385.919,5.761
ways of processing and preparing data,389.4,5.28
better algorithms so on and so forth and,391.68,5.88
so if GPT-4 came about from incremental,394.68,5.459
improvements and nothing major maybe we,397.56,4.62
can expect more of the same for GPT-5,400.139,4.141
that it's going to be ongoing,402.18,5.579
improvements of data pre-processing,404.28,6.0
um training patterns so on and so forth,407.759,4.621
so that's in the news,410.28,5.639
so now let's skip ahead to GPT-5,412.38,6.24
predictions and some rumors,415.919,5.701
uh all right so first top of mind when,418.62,5.4
is it going to come out uh obviously the,421.62,4.68
internet is Rife with rumors some of it,424.02,4.92
has more validity than others,426.3,4.8
um according to one website and I found,428.94,3.96
some of this with uh the help of Bing,431.1,4.14
actually ironically enough,432.9,5.22
um one website said that they expect GPT,435.24,6.0
4.5 to come out this September so that,438.12,4.38
would be a little bit quicker of a,441.24,2.88
turnaround,442.5,4.02
um another blog said that we should,444.12,6.359
expect GPT-5 by the end of 2024 or early,446.52,6.78
2025 just given the historical,450.479,5.241
pattern that seems pretty reasonable,453.3,5.22
when you consider that the testing cycle,455.72,5.86
for GPT-4 was six to nine months,458.52,5.34
rumor has it that,461.58,4.8
they had GPT-4 like last summer or last,463.86,4.679
fall so maybe our predictions about when,466.38,5.46
GPT-4 was completed were correct but,468.539,5.581
they delayed the release due,471.84,5.759
to testing who knows,474.12,6.18
um one Twitter user said and I,477.599,4.38
don't know if this was in response to,480.3,3.959
the leaked Morgan Stanley document,481.979,5.22
um but basically you know and of,484.259,4.681
course it's on Twitter so take it with a,487.199,4.861
grain of salt but basically he said,488.94,6.24
that GPT-5 is scheduled to be finished,492.06,6.539
with its training this December so that,495.18,6.48
kind of lines up with late 2023 early,498.599,5.82
2024 and then you add the testing cycle,501.66,5.219
of six to nine months that puts it at,504.419,4.921
mid-2024.,506.879,4.141
um so another thing that was interesting,509.34,4.8
is in the documentation OpenAI has a few,511.02,6.06
snapshots of the current models that,514.14,5.819
are set to expire in June which is,517.08,4.079
really interesting because they've never,519.959,3.781
done that before so my interpretation is,521.159,4.141
that they're going to say okay we're,523.74,3.84
going to expire these models,525.3,4.2
um but you can use them because they're,527.58,4.439
probably testing new ideas,529.5,3.72
um and then they're gonna you know,532.019,3.721
recycle those models or replace them,533.22,4.739
or upgrade them or something,535.74,3.539
so,537.959,3.841
either way all of these,539.279,5.101
rumors and some of the facts that we're,541.8,3.659
gleaning,544.38,3.06
really kind of point to a shorter,545.459,4.681
testing and release cycle which,547.44,4.74
considering open ai's close partnership,550.14,4.62
with Microsoft Microsoft is very,552.18,4.8
familiar with a regular Cadence right,554.76,4.92
you've got Patch Tuesday with Microsoft,556.98,5.28
server and Microsoft desktop they,559.68,5.219
regularly release new versions uh major,562.26,5.34
and minor versions of Windows and other,564.899,4.801
software so they're probably being,567.6,4.38
pushed to be more like a conventional,569.7,5.1
software vendor and of course that's the,571.98,4.859
direction it's all going right now large,574.8,4.08
language models and AI are new and shiny,576.839,4.321
but before long it's going to be a,578.88,3.899
commodity just like anything else just,581.16,3.06
like your smartphone just like your,582.779,4.68
laptop whatever so I think,584.22,5.88
that we can probably expect,587.459,4.32
some more traction by the end of this,590.1,4.64
year even if it's an incremental update,591.779,7.341
but certainly GPT-5 I think probably,594.74,7.539
mid 2024 at the earliest if I had to,599.12,5.68
guess but I think the end of 2024,602.279,4.56
that seems to be where the consensus,604.8,3.9
is right now I wouldn't put money on it,606.839,4.201
you never know but that seems to be the,608.7,3.66
consensus,611.04,4.5
window size so one of the biggest,612.36,5.82
functional changes of the jump from GPT,615.54,6.12
3 to 4 was going from a 4000 token,618.18,6.42
window up to an 8 000 token window and,621.66,5.88
we're being teased with a 32 000 token window,624.6,5.46
the amount of problems that I have been,627.54,4.38
able to solve and address just by,630.06,5.04
doubling the token window is incredible so,631.92,5.099
if that pattern continues where it,635.1,4.44
either you know it goes up 2X or 8X or,637.019,4.801
whatever if you extrapolate that pattern,639.54,5.46
out then GPT-5 could have anywhere from,641.82,7.5
64 000 tokens to 256 000 tokens so that,645.0,7.86
is roughly 42 000 words up to 170 000,649.32,5.639
words to put that into perspective I,652.86,4.56
think that Dune the original Dune was a,654.959,3.901
hundred and eighty thousand words so it,657.42,4.14
could read all of Dune in one go,658.86,4.919
um couldn't write it but when you,661.56,4.74
consider that most novels are 50 to 70,663.779,5.281
000 words that is more than enough,666.3,5.64
token window to read an entire novel and,669.06,5.16
write another draft of it,671.94,5.1
so just digest that for a minute and,674.22,5.84
think about how much information that is,677.04,6.06
the number of scientific papers that,680.06,6.16
could be so on and so forth now,683.1,6.96
when we talk about window size if we,686.22,5.58
assume that they overcome any,690.06,3.42
diminishing returns on memory,691.8,2.94
performance and compute because it's,693.48,3.299
going to be a trade-off right the,694.74,3.96
larger those internal vectors are the,696.779,3.781
more memory it's going to take and,698.7,3.3
one thing that I didn't include in this,700.56,3.719
because it looked a little too dry but,702.0,4.76
people are basically predicting that,704.279,5.581
GPT-4 takes 10 to 40 times as much,706.76,5.62
compute as GPT-3 and then if you,709.86,4.5
extrapolate that out again GPT-5 will,712.38,3.959
take another 10 to 40 times as much,714.36,4.62
compute so the amount of compute is,716.339,5.101
ramping up exponentially possibly we,718.98,5.88
don't know but what if there's going to,721.44,5.519
be diminishing returns on an algorithmic,724.86,4.56
level so for instance maybe,726.959,4.44
um when you get the vectors that large,729.42,6.539
you might get a dilution which for,731.399,6.301
RNNs and other things basically,735.959,3.781
dilution I'm probably using the wrong,737.7,4.199
word but it kind of forgets what it was,739.74,4.02
talking about at the end of it so do we,741.899,3.781
need new attention mechanisms are we,743.76,4.68
going to need a new architecture or just,745.68,5.159
hundreds of more kinds of algorithmic,748.44,4.32
and incremental optimizations we don't,750.839,3.06
know,752.76,2.819
one other thing that we need to be,753.899,3.841
asking ourselves is how many tokens do,755.579,5.401
we actually need right because chat GPT,757.74,6.899
with 8 000 tokens is able to serve,760.98,6.0
ninety percent of our needs right now,764.639,5.041
only with very long conversations does,766.98,4.68
it forget the original like at the,769.68,3.899
beginning and also I think there's some,771.66,3.54
evidence that they have other memory,773.579,3.541
stuff going on because I've had some,775.2,3.72
pretty long conversations with chat GPT,777.12,3.839
now and I ask it like okay what,778.92,3.18
was the first thing that we talked about,780.959,3.361
and it remembers so I don't know if,782.1,3.539
they've got some search and retrieval,784.32,3.72
going on or some good summarization not,785.639,3.361
sure,788.04,2.7
but the point is,789.0,3.839
there's probably a diminishing returns,790.74,4.5
in terms of utility value in terms of,792.839,5.161
functional value to us the end user and,795.24,4.38
that includes um you know ordinary,798.0,4.079
citizens and civilians like us as well,799.62,4.32
as corporations in business and,802.079,4.32
Enterprise use cases more is not always,803.94,4.86
better so there might be a trade-off in,806.399,5.041
terms of speed cost and intelligence,808.8,6.24
right because what if they find,811.44,6.12
out that like okay 8 000 tokens actually,815.04,5.88
satisfies 95 percent of all use cases so let's,817.56,5.519
just make that 8 000 token,820.92,5.28
um model make it faster cheaper and,823.079,6.38
smarter and then you know maybe we have,826.2,6.36
uh models that are optimized for much,829.459,5.741
larger windows for specific kinds of,832.56,5.459
tasks like summarizing you know half a,835.2,5.28
million uh scientific papers not really,838.019,3.361
sure,840.48,3.419
but it is interesting because honestly,841.38,5.04
if they came out with a 256 000 token,843.899,5.341
model tomorrow I think that 99 percent of people,846.42,4.68
are never going to use that many tokens,849.24,4.44
could be wrong you know I probably sound,851.1,4.44
like some of the people who said like oh,853.68,3.659
nobody's ever going to use a desktop,855.54,3.419
computer so maybe I'm completely wrong,857.339,4.321
you know I I'm the first to admit I,858.959,4.38
frequently am wrong when I make some of,861.66,3.299
these predictions and sometimes I'm,863.339,4.201
hilariously wrong,864.959,5.041
um okay so moving on modality,867.54,5.4
for me the biggest shock of GPT-4 was,870.0,4.62
that it was multimodal I didn't think,872.94,4.019
they were going to go there yet but GPT-4,874.62,4.68
they demonstrated it you can give,876.959,3.961
it pictures it can spit out pictures,879.3,3.719
most people don't have access to that,880.92,3.9
yet it probably requires some work on,883.019,3.901
the API because if you're just sending,884.82,4.5
text over JSON you know a REST API,886.92,4.859
that's one thing sending images it's a,889.32,4.139
little bit different so I suspect that,891.779,2.881
they're probably working on the,893.459,3.661
Integrations with that,894.66,4.02
um that's a lot to figure out I,897.12,3.06
don't envy them that problem it sounds,898.68,4.14
very tedious but when you look at the,900.18,4.8
fact that OpenAI has DALL-E they,902.82,5.699
have Whisper GPT-4 has images you do the,904.98,6.299
math I suspect oh and then you look at,908.519,4.981
um at how much like text to video,911.279,4.68
and video to text is coming out I,913.5,5.16
suspect that GPT-5 will be audio video,915.959,6.0
images and text if not more,918.66,5.88
uh but even still that would be a great,921.959,5.041
start so I was talking with some people,924.54,4.62
about this and what does that mean for,927.0,4.5
for vectors because if you can represent,929.16,7.02
an image or audio or video or text in,931.5,6.959
vectors those vectors are going to have,936.18,4.5
a lot more Nuance to them and so the,938.459,3.841
vector is the embedding right that is,940.68,3.899
the mathematical representation of the,942.3,4.5
input which is then used to generate the,944.579,4.56
output of these models so if you have,946.8,4.86
these multimodal vectors it's entirely,949.139,4.44
possible that these vectors are going to,951.66,4.26
be more abstract and human-like thoughts,953.579,5.101
inside the model which has all,955.92,4.5
kinds of potential implications and I'm,958.68,3.3
not saying that it's going to magically,960.42,4.2
become sentient or or self-aware or,961.98,5.099
anything like that just that if you have,964.62,4.68
a more nuanced way of representing,967.079,5.88
information about you know reality it's,969.3,5.099
entirely possible that that will unlock,972.959,6.481
entirely new capabilities within GPT-5,974.399,7.5
so one other big question is where are,979.44,4.56
they getting the data uh one of the,981.899,3.781
rumors was that they actually ran out of,984.0,3.36
high quality internet Text data that,985.68,3.3
they actually downloaded the entire,987.36,3.9
internet and after they filtered out the,988.98,3.479
garbage,991.26,2.819
um they realized there's not any more,992.459,3.0
text Data out there we need other,994.079,3.18
modalities and that's why they worked on,995.459,3.901
Whisper that's why they worked on,997.259,4.32
DALL-E and so if that's the case then,999.36,3.839
maybe they're working on downloading all,1001.579,4.32
of YouTube all the podcasts all of every,1003.199,4.801
you know uh was it Dailymotion or,1005.899,3.62
whatever you know like basically every,1008.0,3.72
content provider out there that they can,1009.519,5.081
get their hands on and legally and,1011.72,5.1
ethically get that data if it's under,1014.6,4.08
Creative Commons or other,1016.82,4.319
um open open licensing,1018.68,4.2
um so anyways,1021.139,4.56
this is it's really difficult to,1022.88,5.16
anticipate but just the fact that GPT-3,1025.699,5.341
was single-modal and GPT-4 is,1028.04,4.919
multimodal I think we should at least,1031.04,3.36
assume that that trend is going to,1032.959,2.941
continue again there might be,1034.4,3.539
diminishing returns they might find that,1035.9,4.5
most people don't need multimodal models,1037.939,3.961
and so then we might end up with a,1040.4,2.82
branching,1041.9,4.019
um kind of schema Nvidia does this by,1043.22,5.219
the way Nvidia publishes hundreds and,1045.919,3.901
hundreds of different models that have,1048.439,3.601
different specializations,1049.82,4.32
um and Nvidia is really good at cranking,1052.04,4.74
out very specific models for specific,1054.14,5.64
tasks whereas at least right now,1056.78,4.98
OpenAI seems to be focusing on one,1059.78,5.279
flagship model though that,1061.76,4.919
business model might change over time,1065.059,3.721
not sure,1066.679,3.961
um okay so intelligence and capabilities,1068.78,3.72
this is where I kind of really dive off,1070.64,5.279
into sci-fi land so if we look at the,1072.5,5.82
the relative performance of GPT-3 versus,1075.919,7.201
GPT-4 it was,1078.32,6.9
a huge jump in intelligence where it,1083.12,5.34
went from you know I think GPT 3.5 was,1085.22,4.68
able to pass the bar in the 10th,1088.46,3.599
percentile and then four was able to,1089.9,5.1
pass in the 90th percentile so that's a,1092.059,5.641
that's a Quantum Leap Forward so if we,1095.0,4.5
extrapolate that out then we could,1097.7,4.2
probably assume that GPT-5 is going to,1099.5,5.1
pass all tests and all benchmarks in the,1101.9,4.26
99th percentile,1104.6,3.959
or or greater,1106.16,6.18
um if it's that smart then with,1108.559,5.161
the correct integrations they're,1112.34,3.42
already working on integrations ChatGPT,1113.72,4.14
plugins right with the correct,1115.76,5.34
integrations GPT-5 could then outperform,1117.86,6.059
humans at 99 percent of all other tasks,1121.1,4.74
that includes stem jobs science,1123.919,4.321
technology engineering and math and so,1125.84,4.8
the the idea that I had was basically,1128.24,4.98
given the right Integrations and enough,1130.64,5.64
time you could ask GPT-5 to design a,1133.22,5.04
spaceship and it will do it and then if,1136.28,3.36
you give it the right robotics it could,1138.26,3.24
build the thing too,1139.64,4.2
um so,1141.5,4.74
like when I wrote that down I,1143.84,3.78
was like this is absurd then I'm like,1146.24,3.66
you know if we take the quantum,1147.62,4.26
leap from three to four and do that,1149.9,2.88
again,1151.88,2.58
this is actually within the realm of,1152.78,3.6
possibility I think and then another,1154.46,3.78
probably even more controversial,1156.38,4.56
prediction is that um it will be able to,1158.24,4.86
surpass humans in most artistic,1160.94,4.2
endeavors as well such as writing,1163.1,4.5
symphonies composing stories,1165.14,5.399
um and even acting on stage given the,1167.6,4.98
correct rigging and framework so like,1170.539,4.14
maybe it can control a virtual actor,1172.58,4.38
like in the Unreal Engine or a robotic,1174.679,4.021
actor because you look at Disney Disney,1176.96,4.079
is making very very very life-like,1178.7,4.26
animatronics,1181.039,4.861
um so I suspect that one way or another,1182.96,4.8
human actors are going the way of the,1185.9,4.139
dinosaurs just full stop,1187.76,3.48
um why because human actors are,1190.039,2.461
expensive,1191.24,1.86
um,1192.5,2.88
and most actors have signed away rights,1193.1,4.74
to their likenesses by now anyways many,1195.38,4.38
of them unwittingly this came up in,1197.84,4.74
conversation where voice actors and,1199.76,4.38
even some actors that are getting older,1202.58,3.42
have very deliberately signed away their,1204.14,3.24
likeness so that they can be,1206.0,3.78
immortalized in AI,1207.38,7.08
um so if any of this is remotely what,1209.78,7.139
happens with GPT-5 I can understand why,1214.46,4.92
people are calling for a moratorium,1216.919,3.901
um but it's going to happen because,1219.38,4.02
competition is there right if open AI,1220.82,4.14
doesn't do it someone else is going to,1223.4,3.3
and if America doesn't do it some other,1224.96,3.36
country is going to do it and nobody,1226.7,3.18
wants to fall behind so I really don't,1228.32,2.94
think a moratorium is going to happen,1229.88,3.36
but that begs the question what does,1231.26,4.86
happen is this AGI is this Singularity,1233.24,4.86
is this you know are we going to get,1236.12,3.48
regulation is it are we going to get,1238.1,3.06
competition and of course if you're,1239.6,3.12
familiar with my channel you saw that I,1241.16,4.98
predicted AGI within 18 months if GPT-5,1242.72,6.54
qualifies we could have GPT-5,1246.14,8.18
mid-2024 so the timing is there,1249.26,5.06
so all that being said buckle up as a,1254.6,6.06
commenter said in a previous video it's,1258.26,5.22
about to get silly yes that's pretty,1260.66,4.56
much all we can really guarantee right,1263.48,3.48
now is that it's about to get real silly,1265.22,5.3
real fast thanks for watching,1266.96,3.56