| audio (duration: 3–27.3 s) | transcription (string, length: 8–259) |
|---|---|
| (audio) | way this works now is we can have an x which is say like 2.0 3.0 then we can |
| (audio) | so here we see grad zero so as a base case we need to set both.grad to 1.0 |
| (audio) | All right, so I have now should have been sent an email. So let me make sure that I got that email and I did, |
| (audio) | and the output of this is now 5 by 27 so we can see that the shape of this |
| (audio) | to questions with other questions it might ignore your question it might just |
| (audio) | we can actually create that using mp not mp sorry torch dot range of 5 |
| (audio) | So we're saying of all of the messages, we want to list them, right? |
| (audio) | now for every one of those 32 by 3 integers we've retrieved the embedding |
| (audio) | so let's create w2 and v2 |
| (audio) | train the model so first what I'm going to do is I'm going to create a pyour |
| (audio) | All right, so just like I just consumed my burrito, we are going to consume that API that we just created. |
| (audio) | so that's telling us about the gradient information and we can use this gradient |
| (audio) | document and from uh M chains import uh load Vector DB and also load embeddings |
| (audio) | I guess let's go ahead and so we'll I can explore products. There's way more products that |
| (audio) | the exact number of slope will be exact amount of slope is negative 3. |
| (audio) | if a car travels twice as fast as bicycle and the bicycle is four times as |
| (audio) | because if the session key is not new session then it's already a Tim stamp so |
| (audio) | two columns right so each 20 each one of 27 |
| (audio) | that the model which used for the first pass decoding that makes sense but but |
| (audio) | parameters and we need to make sure that p dot grad |
| (audio) | one file defines the GPT model the Transformer and one file trains it on |
| (audio) | value also what I want to add here is with streamlit do spinner basically uh |
| (audio) | then in the next 100 000 we're using a learning rate of 0.01 |
| (audio) | practice uh this works quite well so let's now continue the optimization |
| (audio) | building framework you can see the encoder and decoder for those of you who |
| (audio) | input data are not that useful to us and that's because the input data seems |
| (audio) | integers so we need some kind of a lookup table from characters to integers |
| (audio) | careful because um okay the learning rate we set to 0.1 |
| (audio) | plot it so draw dot of n gives us x1 times w1 x2 times w2 |
| (audio) | ensemble of sub networks and then at test time everything is fully enabled |
| (audio) | using this English models that we have here but if you'd like to use other |
| (audio) | is a much more advanced and popular Optimizer and it works extremely well |
| (audio) | And so part of the message that we can see, we can see who it was to, |
| (audio) | and so if you pass in very positive inputs we're gonna cap it smoothly at |
| (audio) | l with respect to a because a is the one that we bumped a little bit by h |
| (audio) | maybe there's an album that could be created. There's an album resource that contains photos |
| (audio) | just paddings these are not something important just paddings to like fix the |
| (audio) | if it has changed since a certain time. This technique allows for caching, |
| (audio) | first here we have the causal self attention block and all of this should |
| (audio) | this style of string interpolation. So I have the event object, and on it |
| (audio) | so let's now scroll back down to this is much larger reinitialize the |
| (audio) | and so by symmetry also d d by d |
| (audio) | Data via a shared encoder decoder structure so you can see again it's the |
| (audio) | and we could go and query this ourselves, but we don't need to because the API has wrapped up for us. |
| (audio) | and now let's do one learning rate decay what this means is we're going to take |
| (audio) | remember n Ed is 32 instead of having one Communication |
| (audio) | hit rename I'm going to call this one first first |
| (audio) | 100% sure if it would work right now maybe there will be another small error |
| (audio) | the element for example 3 comma 13 is giving us the firing rate of the 13th |
| (audio) | messages and there we can see it's a list with human message AI message then |
| (audio) | so multiply i think it won't surprise you will be fairly similar |
| (audio) | And it's this one here, it's a lightweight REST API client for a VS code. |
| (audio) | into this expression here so let me copy paste this |
| (audio) | we calculate the loss we're not actually reinventing the wheel |
| (audio) | speech using your text welcome back everybody so as you can see that the |
| (audio) | conversational retrieval chain so conversation retrieval chain okay what |
| (audio) | to be useful later during inference because while we're sampling we can |
| (audio) | it's running out to the screen and there's different levels that you can do and error is one of those. |
| (audio) | a list and retrieving, we're going to create. So we're gonna go under body and for the body |
| (audio) | so that plucks out the probabilities of that the neural network assigns to the |
| (audio) | So we now have our app here locally and we can change things. |
| (audio) | engine autograd is short for automatic gradient and really what it does is it |
| (audio) | the account SID showed up there and then it automatically did a two URL encoded and if you look it put |
| (audio) | propagate through your function and then you can use this as a lego block in a |
| (audio) | and this shape is the same but now this is uh it doesn't matter if |
| (audio) | web Text data set which is a fairly large data set of web pages then I |
| (audio) | derivative to all the leaf nodes sorry to all the children nodes of it |
| (audio) | this data will have changed a little bit so now this neuron |
| (audio) | But know that this exists. One of the really cool things that Postman does is |
| (audio) | here so thef chat chain is running what else do we want let's keep it at that |
| (audio) | fashion and then the elements here in the lower triangular part are telling |
| (audio) | State and the stream chat message history takes the chat history also from |
| (audio) | So you can see here, there's this call to action and that's what this is and this is why I can change this |
| (audio) | and then I'm gonna bring this back a little bit here so you can kind of see, let's change |
| (audio) | to a tensor of values that are very close to zero then we're going to get a |
| (audio) | each other might end up in a very similar part of the space and conversely |
| (audio) | is shared one and this is a it's all aggressive all right so I |
| (audio) | of them and uh whatever it says you have to write it right next to it use this uh |
| (audio) | core module we're going to import from the prompts module we're going to |
| (audio) | And it's kind of really a quick little look at this. I don't actually even need to do a collection. |
| (audio) | have this uh residual pathway and you are free to Fork off from the residual |
| (audio) | That's what we're trying to get. And that's what the old model and the new model of security were designed to do. |
| (audio) | chain rule here is d l by d e which we see here is |
| (audio) | I love Twilio, I mean, I love it so much, I even applied for a job with them and I got it. |
| (audio) | in our app we have to account for that we can't use our normal chain if you |
| (audio) | a little bit here, you'll see My Twilio Number. So go ahead and send a text to your number. |
| (audio) | me decode B BYT from idx do shape and then here we're also going to |
| (audio) | a little bit more and then you'll see that there's always this Spotify URL that allows you to see what it was. |
| (audio) | will emit two vectors it will emit a query and it will emit a |
| (audio) | this exercise is number one we got to practice a few more operations and uh |
| (audio) | everything and uh let's continue with the next part this part is to discard |
| (audio) | do not lr not the learning rate but the exponent |
| (audio) | completely abstracted away from me. I'm still in control of what has |
| (audio) | o is 10 h of n |
| (audio) | I think was 2.07 so it went from 2.07 all the way down to 1.48 just by scaling |
| (audio) | here line by line so for example like um text Page would be PDF file get page no |
| (audio) | something meaningful or understandable by human beings like us uh actually this |
| (audio) | so in this case the correct thing will be happening because the same bias |
| (audio) | There are more http verbs, also known as request methods besides get, most common scenario you see is when you |
| (audio) | Who runs the world? Girls. But I don't remember where, what album it's on. |
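
Rows like the ones above pair an audio clip with its transcription string. As a minimal sketch of how such a dataset is typically accessed with the Hugging Face `datasets` library (the repository id below is hypothetical; only the column names "audio" and "transcription" come from the table header):

```python
from datasets import load_dataset, Audio

# Hypothetical repository id; substitute the actual dataset path.
ds = load_dataset("user/asr-tutorial-clips", split="train")

# Decode the "audio" column to waveform arrays at a fixed sampling rate.
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

row = ds[0]
print(row["transcription"])           # the transcription string for this clip
print(row["audio"]["sampling_rate"])  # decoded audio dict: array, sampling_rate, path
```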