davidshapiro_youtube_transcripts / Automating Science with GPT4 attempting and failing to perform autonomous literature review_transcript.csv
text,start,duration
what is up everyone David Shapiro here,0.359,8.16
and we are working on science,4.2,7.68
okay so where we left off what I was,8.519,6.921
working on was my um,11.88,6.06
basically automating science and a lot,15.44,4.48
of you have pointed out that like,17.94,3.9
you would prefer that I keep working on,19.92,4.019
autonomous cognitive entities and Raven,21.84,4.199
and stuff like that but what I wanted to,23.939,3.84
say is that I'm working with some really,26.039,4.441
like the world leaders in cognitive,27.779,4.32
architecture at least the ones that are,30.48,4.44
not already in Academia and,32.099,5.401
um inside the establishment so,34.92,4.86
those of us that are outside are working,37.5,5.18
together and so we'll have some,39.78,6.54
demonstrations in the coming weeks and,42.68,5.08
the stuff that these guys are working on,46.32,4.559
the level of autonomy that these,47.76,6.18
machines have is incredible,50.879,5.041
um so with that being said I am moving,53.94,5.939
on to science and the reason is,55.92,7.439
um one I have an ability to,59.879,5.521
make this stuff more accessible and I,63.359,3.661
get so many messages from people that,65.4,3.539
like either have never coded or haven't,67.02,4.62
coded in many years who are inspired by,68.939,5.521
my work to get back in and like the more,71.64,4.74
people we have participating in,74.46,4.019
artificial intelligence and science the,76.38,3.419
better,78.479,3.601
um yeah so that's that so,79.799,4.32
that's my mission now,82.08,5.96
um with that being said I wanted to,84.119,6.481
revisit this idea of regenerative,88.04,4.66
medicine because that's important to me,90.6,3.42
etc etc,92.7,5.64
so uh last time what we did was we,94.02,6.959
found some sources so we've got the the,98.34,4.56
Open Journal of Regenerative Medicine,100.979,5.161
we've got bioRxiv and then the,102.9,6.6
regenerative medicine topic on Nature,106.14,5.82
that's still really broad,109.5,4.74
um because like let's say you're,111.96,4.92
an orthopedic surgeon who focuses on,114.24,4.68
shoulders or you know cervical joints or,116.88,2.879
whatever,118.92,3.42
uh or a researcher who has a particular,119.759,4.621
body part that you're focusing on you're,122.34,3.36
going to want to perform a literature,124.38,2.82
review that's a little bit more narrow,125.7,4.02
so what I started doing was looking for,127.2,4.32
data sources that were going to be a,129.72,3.659
little bit more specific to what we want,131.52,4.799
now I also got access to Bard,133.379,5.58
and I gave Bard and Bing the same exact,136.319,4.621
thing and Bing was just like okay yeah,138.959,3.721
here's a few,140.94,3.9
examples and uh the one that it gave,142.68,3.6
here,144.84,4.92
um was directly relevant and it was,146.28,6.0
um it's over a year old but still,149.76,4.32
um or not over a year old it's about six,152.28,3.0
months old,154.08,3.9
um still not bad and then I asked,155.28,4.44
I asked Bard and it's like I can't help,157.98,4.02
with that I'm only a language model,159.72,5.7
um can you search the internet,162.0,6.3
uh like aren't you a search engine now,165.42,5.76
Bard is still in beta so yes,168.3,4.62
um,171.18,5.72
okay it looks like it doesn't even have,172.92,3.98
yeah this is like super,176.94,6.18
super basic okay so Bard is not useful,179.58,6.36
um now that being said uh I did find a,183.12,5.1
few things for you know cartilage,185.94,4.079
regeneration,188.22,4.2
um Rehabilitation so this is all,190.019,6.121
specific to shoulders and stem cells,192.42,6.0
um under regenerative medicine if we go,196.14,4.2
a little bit broader,198.42,4.14
um we get up to actually I guess we,200.34,3.66
still only get five results under the,202.56,3.48
Nature Journal of Regenerative Medicine,204.0,4.56
which is fine uh bioRxiv gives us a,206.04,4.32
little bit more under stem cell so,208.56,4.02
basically the next step is I'm going to,210.36,5.04
download and process some some uh papers,212.58,4.92
that are specific to this and then I'll,215.4,4.02
show you what happens next,217.5,5.04
um basically I was chatting with some uh,219.42,5.58
some of my friends and we were talking,222.54,6.119
about how loquacious how verbose the,225.0,5.519
models are and we're working on that,228.659,3.121
concept of sparse priming,230.519,3.78
representations and we came up with a,231.78,4.739
conversational model that is also sparse,234.299,4.381
and very concise and that is Mordin,236.519,4.261
Solus from Mass Effect so we're,238.68,4.5
basically creating a Mordin Solus uh,240.78,4.86
chatbot model and it's great it's also,243.18,3.66
really hilarious so I'll show you that,245.64,3.36
in just a moment but yeah we'll be right,246.84,3.72
back,249.0,3.959
okay I wanted to add a quick note I just,250.56,4.98
found journals.sagepub.com which has an,252.959,5.161
open access feature and is actually,255.54,3.9
pretty nice so I'm going to download a,258.12,3.06
few more from here also I've got all the,259.44,3.539
sources documented in the readme which I,261.18,3.72
am working on updating so don't worry,262.979,3.961
all of this will be documented,264.9,4.56
um for ease of access all right be right,266.94,3.36
back,269.46,2.94
okay we're about ready to test here let,270.3,4.679
me sit up actually okay,272.4,4.5
um but yeah so here is the system,274.979,5.041
message it doesn't work in 3.5 and,276.9,6.06
that's not uh surprising because 3.5,280.02,4.8
they even documented that it doesn't pay,282.96,4.56
as much attention to the system message,284.82,4.14
um but in this case I say I am,287.52,3.179
Mordin Solus salarian scientist,288.96,3.0
currently performing literature review,290.699,3.541
reading many papers taking notes as I go,291.96,4.56
user assisting by submitting research,294.24,4.26
pages incrementally will respond with,296.52,3.72
Salient notes hypotheses possible,298.5,3.3
research questions suspected gaps in,300.24,3.42
scientific literature and so on whatever,301.8,3.42
is most relevant important note,303.66,3.3
responses will be recorded by user to,305.22,3.84
review later responses must include,306.96,4.38
sufficient context to understand goal,309.06,3.72
always same advance science solve,311.34,2.7
problems help people respond should,312.78,2.76
follow same linguistic pattern focus on,314.04,3.0
word economy convey much without,315.54,3.06
Superfluous words avoid missing,317.04,3.9
important details so here I've got just,318.6,3.92
a random page,320.94,4.68
from uh one of the papers that I,322.52,5.56
got and you can see that it the PDF,325.62,4.68
scraping didn't really work because it's,328.08,4.679
missing a lot of spaces so let me just,330.3,3.899
show you how good this is it's a little,332.759,4.801
bit slow because it's gpt4,334.199,7.161
um but yeah so it,337.56,6.6
fixes the spelling and stuff,341.36,5.38
um and so on,344.16,5.52
and it's basically just um summarizing it,346.74,5.58
as it goes let's see post-operative,349.68,4.739
Rehabilitation and mobilization for four,352.32,4.46
weeks,354.419,2.361
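The flow being demonstrated, one persona system message with each paper page appended as a user message, can be sketched roughly like this. The system message wording is taken from the video; the function names and the injected `complete` callable are my assumptions (in the video the call is GPT-4 at temperature 0), not the actual script:

```python
# A minimal sketch, assuming a chat-completions API shaped like OpenAI's.
# `complete` stands in for the real model call so the logic is testable.

SYSTEM_MESSAGE = (
    "I am Mordin Solus, salarian scientist, currently performing literature "
    "review: reading many papers, taking notes as I go. User assists by "
    "submitting research pages incrementally. Will respond with salient notes, "
    "hypotheses, possible research questions, suspected gaps in scientific "
    "literature, whatever is most relevant. Responses recorded by user to "
    "review later, so must include sufficient context. Goal always same: "
    "advance science, solve problems, help people. Focus on word economy, "
    "convey much without superfluous words, avoid missing important details."
)

def build_messages(history, page_text):
    """System message first, then the running conversation plus the new page."""
    return ([{"role": "system", "content": SYSTEM_MESSAGE}]
            + history
            + [{"role": "user", "content": page_text}])

def take_notes(history, page_text, complete):
    """One model call per page; `complete` is any chat-completion callable,
    e.g. a thin wrapper around a GPT-4 call with temperature=0."""
    notes = complete(build_messages(history, page_text))
    # Record both sides so later pages see the accumulated conversation.
    history.append({"role": "user", "content": page_text})
    history.append({"role": "assistant", "content": notes})
    return notes
```

The point of injecting `complete` is that the note-taking logic can be exercised with a stub before spending GPT-4 tokens.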
so there you go not only,358.62,6.06
is it re-summarizing it,361.56,6.66
but it is uh like cleaning,364.68,4.62
it up,368.22,3.84
um and it's posing a research question,369.3,5.04
so if we go back to the file and then,372.06,4.74
add the next page,374.34,5.28
to the chat,376.8,5.94
um basically I'm having chat gpt4,379.62,7.46
read it and restate it as it goes,382.74,6.84
and it will accumulate more and more,387.08,4.059
insights and what I'm going to do with,389.58,2.64
the script and I'll show you the script,391.139,2.701
in a second is actually record this,392.22,4.5
output alongside the pages but I'm,393.84,6.799
showing you oh come on,396.72,3.919
there we go statistical analysis,400.86,4.559
performing SPSS Statistics,402.36,4.5
um,405.419,3.12
paired t-test,406.86,3.48
etc etc so you can see that it,408.539,5.1
goes uh pretty quickly results,410.34,5.639
um basically restates the results uh,413.639,4.881
pretty quickly,415.979,2.541
um also nice and clean,419.699,3.021
possible research questions factors,423.24,5.48
there you go,426.0,2.72
so there we have it it's ready to go now,429.24,6.36
with 17 papers,432.539,5.88
um and quite a bit of text,435.6,3.599
um,438.419,2.701
it's probably going to be prohibitively,439.199,4.22
expensive yeah because you see this is,441.12,5.4
uh two and a half million lines,443.419,3.881
um,446.52,4.32
of uh of text so I'm probably not going,447.3,6.06
to have it read the entire thing,450.84,3.9
um because this,453.36,3.38
let me see how long this original one is,454.74,7.799
2023.01 16 522 18v1,456.74,5.799
so that was this one,462.66,4.379
wait no that's the JSON,465.0,4.44
I need the text,467.039,4.321
um let's see,469.44,5.599
121 kilobytes,471.36,3.679
so the number of new pages that shows up,475.62,4.859
here oh it's got it at the end so it's,477.78,4.38
only 50 pages,480.479,2.94
um so this will be a little bit,482.16,4.2
expensive but we'll see also one thing,483.419,5.34
that I can probably work to exclude is,486.36,4.44
all the citations,488.759,3.961
um but maybe that's not actually a bad,490.8,5.04
bad thing to to include,492.72,4.86
um but yes let's run the script let me,495.84,4.199
show you the script real quick so read,497.58,4.98
papers it's super straightforward I've,500.039,5.94
got the same chat GPT completion we've,502.56,6.72
got gpt4 I set the temperature to zero,505.979,4.321
um,509.28,4.8
it'll save it all out let's see and then,510.3,7.56
here for file in os.listdir papers,514.08,6.3
underscore json if file endswith json,517.86,5.82
file path join load the file,520.38,7.62
and then for page in data pages which,523.68,6.779
is what you can see here so original,528.0,4.56
file name pages and then there's the,530.459,3.961
embedding page number in text so they,532.56,3.6
are in order so basically it'll be,534.42,4.14
reading each page one at a time and kind,536.16,5.52
of thinking as it goes so this is a very,538.56,5.339
simple cognitive architecture,541.68,5.4
that will basically just pretend to be,543.899,5.88
a research scientist reading papers as,547.08,5.52
it goes kind of jotting down notes we,549.779,4.861
could probably do a little bit more,552.6,5.22
concise but it's doing a good job of,554.64,6.3
summarizing the most Salient details,557.82,4.38
um and that's probably as far as I'll,560.94,2.64
get today,562.2,3.72
and then it appends it all so on and so,563.58,5.04
forth and then up here uh basically what,565.92,5.7
I do is I'll try and do,568.62,5.76
the output but if it is too long,571.62,5.219
then I'll just remove the oldest message,574.38,6.0
so let's see what happens with this,576.839,6.421
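The read-papers loop just described, iterating over the JSON files, feeding pages in order, appending notes, and dropping the oldest messages when the conversation grows too long, can be sketched like this. The key names (`pages`, `text`), the message cap, and the output file are assumptions, and the GPT-4 call (temperature 0 in the video) is left as an injected callable:

```python
# Sketch of the step-three loop, under assumed file layout and key names.
import json
import os

MAX_MESSAGES = 20  # assumed cap; the video trims oldest messages when too long

def trim(history, max_messages=MAX_MESSAGES):
    """Drop the oldest messages so the conversation fits the context window."""
    return history[-max_messages:]

def read_papers(paper_dir, complete, notes_path="notes.txt"):
    """Read every page of every paper JSON, taking notes as we go."""
    history = []
    for filename in sorted(os.listdir(paper_dir)):
        if not filename.endswith(".json"):
            continue
        with open(os.path.join(paper_dir, filename)) as f:
            data = json.load(f)
        for page in data["pages"]:  # pages are stored in order
            history.append({"role": "user", "content": page["text"]})
            history = trim(history)          # remove oldest if too long
            notes = complete(history)        # the GPT-4 call in the video
            history.append({"role": "assistant", "content": notes})
            with open(notes_path, "a") as out:   # record output alongside pages
                out.write(notes + "\n\n")
    return history
```

Trimming from the front keeps the most recent pages and notes in context, which matches the "remove the oldest message" behavior described in the video.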
um all right so clear screen zoom in a,580.38,6.0
little so you can see it python step,583.26,5.1
three read papers,586.38,4.32
and let's see how it goes so because,588.36,3.9
we're not I'm not using the streaming,590.7,3.24
API we're only going to see the output,592.26,4.56
once uh Mordin is done,593.94,3.54
um,596.82,3.18
and then I'll let this run at least for,597.48,4.919
a little bit because gpt4 is so,600.0,5.36
expensive I probably won't do all 17,602.399,5.641
Pages let's see so we're at 28,605.36,5.26
cents right now so we'll see about how,608.04,5.58
expensive they are per run,610.62,4.98
um there we go,613.62,4.38
objective methods results conclusion,615.6,4.44
okay cool,618.0,5.04
so just restates it very very succinctly,620.04,4.68
we'll watch it a couple more times,623.04,3.78
you're probably watching on 2x anyways,624.72,4.619
so and actually some of you watch on 3x,626.82,4.38
I don't know how you understand me if,629.339,3.781
you watch that fast,631.2,5.759
um but yeah so while that's running,633.12,6.48
let's see,636.959,5.641
do a quick refresh it takes what like,639.6,5.28
five minutes to update,642.6,4.2
so we'll see,644.88,2.959
um,646.8,5.039
objective design and exosuit there we go,647.839,6.881
conclusion exomuscle design,651.839,5.521
so it looks like it's kind of restating,654.72,5.34
it could be problematic to,657.36,4.52
keep feeding it in,660.06,4.62
talks about the design,661.88,7.44
thermoplastic polyurethane TPU coated,664.68,4.64
and also we should have the uh chat logs,672.48,4.919
here,676.32,3.24
okay so the chat logs are,677.399,4.56
just um saving the whole thing I guess,679.56,5.219
what I should do is also save the um,681.959,7.641
save the user input but let's see,684.779,4.821
so it keeps restating the whole,694.98,6.06
paper which I don't think is a good use,697.92,4.32
of,701.04,3.12
of time,702.24,4.64
all right,704.16,2.72
it's not a good use of tokens here,710.399,3.12
I'm going to pause it real quick and see,712.44,2.399
if I can investigate what's going on,713.519,2.88
here,714.839,3.421
okay I changed the system message a,716.399,3.601
little bit and it works um much better,718.26,4.74
so basically I just added a um a note,720.0,4.68
here at the end focus on last page no,723.0,3.18
need to restate all notes every time,724.68,4.68
prefer to keep notes concise succinct,726.18,5.279
um and then the exponential back off,729.36,4.68
that I added last time is actually,731.459,5.94
really helpful because the gpt4 API is,734.04,6.6
so busy you're liable to time out but,737.399,6.06
you might also get rate limited so,740.64,6.0
the longer between tries the better but anyways,743.459,5.761
here let me show you what I mean,746.64,4.439
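The exponential backoff mentioned here can be sketched as a small wrapper around the API call; the retry count, delays, and caught exception types are assumptions rather than the video's exact code:

```python
# Sketch of retry-with-exponential-backoff for a busy, rate-limited API.
import time

def with_backoff(fn, max_retries=5, base_delay=1.0, retry_on=(Exception,)):
    """Call fn(), doubling the wait after each failure (1s, 2s, 4s, ...).
    Re-raises the last error once max_retries is exhausted."""
    for attempt in range(max_retries):
        try:
            return fn()
        except retry_on:
            if attempt == max_retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

In practice `retry_on` would be narrowed to the API's timeout and rate-limit exception classes rather than bare `Exception`.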
um,749.22,4.64
there we go,751.079,2.781
it's better it's not perfect so in this,754.56,4.44
case it summarizes the first page and,757.079,3.481
then we move on to the second page where,759.0,3.959
it talks about the design comparison and,760.56,3.54
findings,762.959,3.361
so we're getting there,764.1,5.88
and then uh testing and testing results,766.32,6.24
conclusion it's a pretty short paper one,769.98,4.32
thing that occurs to me is that with the,772.56,3.36
longer because I started this experiment,774.3,4.2
before I had access to gpt4 so I might,775.92,5.039
need to go back to the drawing board and,778.5,5.16
make use of that 8000 token,780.959,5.101
um window so rather than submitting it,783.66,4.859
one page at a time which was you know,786.06,5.82
when I only had 4000 tokens what I,788.519,6.541
might do is redo this and kind of do it,791.88,5.88
as like one big chunk,795.06,6.36
um to to summarize it but another rule,797.76,6.3
of thumb that I have is don't try to,801.42,5.159
force the context window because if it,804.06,5.459
doesn't work it doesn't work and the,806.579,4.38
fact that it keeps restating this I,809.519,2.521
think this might not be the right,810.959,4.32
approach but the idea is still there,812.04,5.64
where it's like let's let's use the,815.279,4.141
model to do as much of the science as,817.68,5.339
possible so today was kind of a wash but,819.42,7.34
we'll see all right thanks for watching,823.019,3.741