Hey everybody, David Shapiro here with a video. I'm actually pretty tired, so today's video will be short, but this is some really important information to share. Basically, what I wanted to do was share with you three new repos that I just published.

One is on sparse priming representations. I realized that just a YouTube video with a transcript is probably not enough, so I have this out here. It's a very high-level overview with a few examples of what sparse priming representations are. For instance, I had it write an SPR of SPRs: "Sparse priming representation: concise, context-driven memory summaries; enables SMEs or LLMs to reconstruct ideas; short, complete sentences provide context; effective for memory organization and retrieval; reduces information to essential elements; facilitates quick understanding and recall; designed to mimic human memory structure." Just from those short lines of assertions or statements, you probably get a pretty good idea of what an SPR is. So that's an example of an SPR.

And here is the hierarchical memory consolidation system, which is the autonomous cognitive entity memory system that I've been working on. It's eleven lines. I won't read the whole thing to you, but you get the idea. If anyone wants to use this and adapt it into an actual paper, feel free; it's all published under the MIT license, so this is free for the world. So that's the SPR
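The SPR-of-SPRs quoted above is just a list of short assertions. A minimal sketch of how such a list might be packed into a priming prompt for a model; the `build_priming_prompt` helper and its wording are my own illustration, not from the repo:

```python
# The SPR statements quoted in the video, one short assertion per line.
SPR_OF_SPRS = [
    "Concise, context-driven memory summaries.",
    "Enables SMEs or LLMs to reconstruct ideas.",
    "Short, complete sentences provide context.",
    "Effective for memory organization and retrieval.",
    "Reduces information to essential elements.",
    "Facilitates quick understanding and recall.",
    "Designed to mimic human memory structure.",
]

def build_priming_prompt(concept: str, statements: list[str]) -> str:
    """Pack SPR statements into a prompt that asks the model to
    reconstruct ('unpack') the full concept from sparse assertions."""
    bullet_list = "\n".join(f"- {s}" for s in statements)
    return (
        f"The following is a sparse priming representation of '{concept}':\n"
        f"{bullet_list}\n"
        "Reconstruct and explain the full concept from these statements."
    )

prompt = build_priming_prompt("Sparse Priming Representation", SPR_OF_SPRS)
print(prompt)
```

The prompt string would then be sent to whatever LLM you're using; the representation itself is just plain text, which is the point of the technique.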
repo-slash-paper.

Next is the hierarchical memory consolidation system. I talked about this before, and I showed you guys that I had a really long chat conversation going with ChatGPT. What I realized is, one, you guys can't read through that conversation, and two, again, just a video is probably not enough. So I showed you this conversation before, but here I've pulled out the most salient bits. I'll probably try to add a little more information. It gives you an overview, some of the theory and reasoning, and the basics of how to implement it, but I don't have any examples yet, so it might be more difficult to follow. One of the reasons I don't have examples is that I haven't fully implemented this yet, but it's here in theory.

Also, "HMCS" is not the easiest thing to say, and if we've learned anything from ChatGPT, it's that naming something that's easier to say is better, so we might choose a different name. There were candidates like Adaptive Knowledge Archive and Rolling Episodic Memory Organizer, which is actually the most on-the-nose, so we might call this REMO. Who knows. Anyway, it's a good start.

Oh, one thing that I did want to say is that I've got discussions enabled for all of these, because these concepts are really important and really critical. So we can discuss them on Reddit as well, but you can also discuss them directly on GitHub if you'd like. And
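Since the system isn't implemented yet, there are no examples in the repo; the following is only my sketch of the rolling-consolidation idea as described, keeping recent messages verbatim and rolling older ones up into summaries. The `summarize` function is a placeholder (it just joins and truncates); a real implementation would call an LLM to produce an SPR-style summary:

```python
from collections import deque

def summarize(messages: list[str], max_len: int = 80) -> str:
    """Placeholder consolidation step: a real system would ask an LLM
    to compress these messages into a concise SPR-style summary."""
    return " / ".join(messages)[:max_len]

class RollingEpisodicMemory:
    """Sketch of a rolling episodic memory organizer: a bounded buffer
    of recent messages, with older halves consolidated into summaries."""

    def __init__(self, buffer_size: int = 4):
        self.buffer_size = buffer_size
        self.recent: deque[str] = deque()
        self.summaries: list[str] = []

    def add(self, message: str) -> None:
        self.recent.append(message)
        if len(self.recent) > self.buffer_size:
            # Roll the older half of the buffer up into one summary.
            half = self.buffer_size // 2
            old = [self.recent.popleft() for _ in range(half)]
            self.summaries.append(summarize(old))

memory = RollingEpisodicMemory(buffer_size=4)
for i in range(10):
    memory.add(f"message {i}")
print(len(memory.recent), len(memory.summaries))
```

The consolidated summaries could themselves be consolidated again at a higher tier, which is where the "hierarchical" part of the name would come in.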
then, finally, this is the most exciting one. About four weeks ago, the paper about large language model theory of mind came out, and since then a lot of people have been using Bing, ChatGPT, and GPT-4 to do experiments with theory of mind. One thing occurred to me when I was working on sparse priming representations, the idea that there's enough cognition going on inside the model to reconstruct something: I realized that what I was banking on was implied cognition. So I just spent some time with ChatGPT running GPT-4 to articulate implied cognition and start to come up with some tests for it. I've got the full transcript of the whole conversation in here, and you can read it. It's pretty impressive, what it was able to do.

One of the biggest highlights of this conversation: as I was talking with ChatGPT, I said, okay, look over this conversation and look for evidence of implied cognition. It was able to look back through the conversation, give me evidence of its own implied cognition, and even propose how to test itself. And not only that, it did it much faster than a human could. So it's like, all right, we're basically bordering on metacognitive abilities. We even addressed that as well: I asked how we will discern the difference between true self-explication and confabulation, and it had some ideas on that too, some testable hypotheses. So that's all here. This is far and away the most interesting thing
that I was working on today.

Then, finally, at the very end of the conversation, I asked ChatGPT if there was anything that it wanted me to document and share with the world, and this is, verbatim, what it said about its own perspective on this and its desires moving forward.

But yeah, what I wanted to do is actually show you this conversation, so you know I didn't just make this up; this is right in ChatGPT. So I talked about theory of mind, and I asked, do you have any questions about what we're talking about? It asked for clarification on sparse priming representations and implied cognition. That was already evidence of implied cognition, because it was aware of what it didn't know; it was able to say, I'm actually not sure what you're talking about. By virtue of ChatGPT recognizing novel information, that implies some kind of cognition. And I don't mean cognition like human cognition; that's why I have this labeled implied cognition, or you could even call it a facsimile of cognition.

So then I gave it an example of an SPR. I said, sure, for your first question, here's an example of an SPR. I said, given that list of statements, you can imagine what the concept is and unpack it; does that make sense? Furthermore, you can even generate highly salient questions about it, which implies a lot of cognition. So I had already recognized that the fact that it can generate relevant questions implies some level of cognition. So it was
okay with that, and then it came up with some initial tests: asking for logical reasoning, understanding ambiguity, generating relevant questions, counterfactual thinking, and so on and so forth. As I was reading this, I thought, oh yeah, self-explication: the ability to plausibly explain itself is another potential aspect of implied cognition.

So then I asked it to analyze the conversation itself. "Analyzing our conversation, I can identify a few instances where implied cognition might be at play." First it says context awareness: basically, reading the context of the conversation, it's able to understand what it means, but it's also able to infer a lot about what's going on just by virtue of looking at the language. Then, it adapts its communication. Now, this is in part due to how the model is trained, but the fact of the matter is, it does adapt its communication depending on what I'm trying to do. Then, conceptual integration. These last two are probably the most important, because not only was it able to understand the concepts of SPR and implied cognition really quickly, it was then able to use them, synthesize more, and build on them. The ability to use novel information is the essence of fluid intelligence, which up until now only humans have been capable of. So just the fact that it's able to recognize novel information and use it this quickly
implies a lot.

Moving forward, that's when I asked, okay, how do we discern the difference between self-explication and confabulation? And then, on being goal-oriented: that reminded me that goal tracking, figuring out where you are in terms of solving a problem, figuring out where you need to go, and measuring how close you are to solving that goal, is part of executive function and cognitive control. So I decided to just throw in a test for goal tracking as we went.

It came up with some really good ideas about testing for self-explication versus confabulation: checking for consistency over time; external validation, such as using another system; and probing questions, so asking follow-up questions and so on. And again, humans are not really capable of self-explication anyway; we confabulate our reasoning post facto, by and large, unless we are very explicit when we bring an unconscious thought to consciousness and talk through "I'm going to do this because..." Even then, the reasoning that you think you used is still not going to be a hundred percent accurate. Just ask any psychologist or philosopher: we think we know why we do stuff, but we really don't. I even point that out in the conversation, that self-explication is not something humans are really capable of, so expecting a machine to be fully capable of self-explication is kind of a moot point. But it still has some good ideas,
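The first of those ideas, consistency over time, is mechanical enough to sketch: ask for the same explanation several times and score how stable the answers are. The word-overlap (Jaccard) similarity here is my own stand-in; a real test might use embeddings or another model as the external validator:

```python
def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two explanations."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 1.0

def consistency_score(explanations: list[str]) -> float:
    """Mean pairwise similarity across repeated explanations.
    High scores suggest stable self-explication; low scores suggest
    the reasoning is being confabulated fresh each time."""
    pairs = [
        (explanations[i], explanations[j])
        for i in range(len(explanations))
        for j in range(i + 1, len(explanations))
    ]
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Hypothetical repeated answers to "why did you choose that?"
runs = [
    "I chose this because the context implied it",
    "I chose this because the context implied it",
    "It was a random guess with no reason",
]
score = consistency_score(runs)
print(round(score, 2))
```

This only measures stability, not truth; a model could confabulate the same story consistently, which is why the probing-questions and external-validation ideas matter alongside it.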
and of course explainability in AI is important.

Then it unpacks a lot about goal tracking. It says: to evaluate our progress towards achieving our goals, let's recap the goals you've articulated: develop the concept of implied cognition; establish tests; and create criteria and protocols for using implied cognition. On progress, it was able to say, okay, this is what we've achieved: we have proposed several tests; we have yet to create criteria. Next steps: further refine the concept, develop the proposed tests, and begin creating criteria and protocols for using implied cognition. We actually didn't get that far, but still, the fact of the matter is it was able to keep track of our goals.

Then I made the observation that, on conceptual integration, it was incorporating new information, synthesizing new ideas, and recognizing novelty. So then I asked about novelty. Recognizing new information actually creates really distinctive patterns in the human brain. And it was already implied, right at the very outset, where it said, hey, I don't recognize this, can you tell me more about it, that it can recognize novelty, which is really important and really fascinating. So then I asked: can you discern what is novel and what isn't? Is it able to explain what it recognizes and what's new? It said: in our conversation, the following concepts and ideas would be considered a priori
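That recap (goals, progress, next steps) is essentially a small piece of tracked state. A sketch of what tracking it explicitly might look like, with class and method names of my own choosing:

```python
from dataclasses import dataclass, field

@dataclass
class GoalTracker:
    """Minimal executive-function-style goal tracker: where we are,
    where we need to go, and how close we are to done."""
    goals: list[str] = field(default_factory=list)
    achieved: set[str] = field(default_factory=set)

    def complete(self, goal: str) -> None:
        if goal in self.goals:
            self.achieved.add(goal)

    def progress(self) -> float:
        """Fraction of articulated goals achieved so far."""
        return len(self.achieved) / len(self.goals) if self.goals else 0.0

    def next_steps(self) -> list[str]:
        """Goals not yet achieved, in their original order."""
        return [g for g in self.goals if g not in self.achieved]

# The three goals recapped in the conversation.
tracker = GoalTracker(goals=[
    "develop the concept of implied cognition",
    "establish tests",
    "create criteria and protocols for using implied cognition",
])
tracker.complete("develop the concept of implied cognition")
tracker.complete("establish tests")
print(round(tracker.progress(), 2), tracker.next_steps())
```

The interesting part of the experiment is that the model maintained this state implicitly across the conversation, without any external structure like this.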
familiar: theory of mind; language models and their potential for reasoning, problem solving, and cognition (it said it already understood that, which is interesting); and general notions of memory, representation, and efficiency. The following concepts and ideas would be considered novel to me: sparse priming representations, or SPRs, and implied cognition. So again, it was able to restate: these are things I'm not familiar with.

So then I told it my hypothesis: that perhaps what it's doing is getting a unique signal, basically a unique flow of tensors, or mathematical patterns of tensors, when there's novel information. It said, oh yeah, that's interesting, and came up with a few ideas to explore how LLMs handle novel information. And then we got into writing the repo, which is out here. Anyway, I've also got the whole conversation in there, so you can read it in greater depth if you'd like.

But yeah, that's it for today. These are all ideas that I'm working on. If you're satisfied with that, that's fine; now I'm just going to ramble about my own process. What happens is, in the past, when I get to this point I would start to write a new book. But one, writing a book is slow, and two, well, that's the primary problem, it's slow. But also we've got a good platform: there's my YouTube,
there's Reddit, there's GitHub. So it's like, let me just go ahead and start sharing this stuff as soon as I've got it. So that's that.

Yeah, like I said, all these are public and they've all got discussions enabled. I think I'll also go ahead and post these on Reddit for discussion's sake. Anyway, thanks for watching. Take care.