Columns: `audio_id` (string, 15-17 chars), `audio` (audio clip, duration 6.76-14.4 s; not rendered in this text preview), `raw_text` (string, 160-200 chars), `normalized_text` (string, 160-200 chars).

audio_id | raw_text | normalized_text
---|---|---
llm_intro_0.wav | This is unlike many other language models that you might be familiar with. For example, if you're using ChatGPT or something like that, the model architecture was never released. | This is unlike many other language models that you might be familiar with. For example, if you're using ChatGPT or something like that, the model architecture was never released. |
llm_intro_1.wav | Because this is a 70 billion parameter model, every one of those parameters is stored as two bytes. And so therefore, the parameters file here is 140 gigabytes. | Because this is a 70 billion parameter model, every one of those parameters is stored as two bytes. And so therefore, the parameters file here is 140 gigabytes. |
llm_intro_2.wav | And it's two bytes because this is a float 16 number as the data type. Now, in addition to these parameters, that's just like a large list of parameters for that neural network. | And it's two bytes because this is a float 16 number as the data type. Now, in addition to these parameters, that's just like a large list of parameters for that neural network. |
llm_intro_3.wav | You also need something that runs that neural network. And this piece of code is implemented in our run file. Now, this could be a C file or a Python file or any other programming language, really. | You also need something that runs that neural network. And this piece of code is implemented in our run file. Now, this could be a C file or a Python file or any other programming language, really. |
llm_intro_4.wav | It can be written in any arbitrary language. But C is sort of like a very simple language, just to give you a sense. And it would only require about 500 lines of C with no other dependencies. | It can be written in any arbitrary language. But C is sort of like a very simple language, just to give you a sense. And it would only require about 500 lines of C with no other dependencies. |
llm_intro_5.wav | It was only running a seven billion parameter model. A 70B would be running about 10 times slower, but I wanted to give you an idea of sort of just the text generation and what that looks like. | It was only running a seven billion parameter model. A 70B would be running about 10 times slower, but I wanted to give you an idea of sort of just the text generation and what that looks like. |
llm_intro_6.wav | So not a lot is necessary to run the model. This is a very small package. But the computational complexity really comes in when we'd like to get those parameters. | So not a lot is necessary to run the model. This is a very small package. But the computational complexity really comes in when we'd like to get those parameters. |
llm_intro_7.wav | Because whatever's in the run.c file, the neural network architecture and sort of the forward pass of that network, everything is algorithmically understood and open and so on. | Because whatever's in the run.c file, the neural network architecture and sort of the forward pass of that network, everything is algorithmically understood and open and so on. |
llm_intro_8.wav | So to obtain the parameters, basically the model training, as we call it, is a lot more involved than model inference, which is the part that I showed you earlier. | So to obtain the parameters, basically the model training, as we call it, is a lot more involved than model inference, which is the part that I showed you earlier. |
llm_intro_9.wav | So because Llama 2 70B is an open source model, we know quite a bit about how it was trained because Meta released that information in a paper. So these are some of the numbers of what's involved. | So because Llama 2 70B is an open source model, we know quite a bit about how it was trained because Meta released that information in a paper. So these are some of the numbers of what's involved. |
llm_intro_10.wav | You basically take a chunk of the internet that is roughly, you should be thinking, 10 terabytes of text. This typically comes from like a crawl of the internet. | You basically take a chunk of the internet that is roughly, you should be thinking, 10 terabytes of text. This typically comes from like a crawl of the internet. |
llm_intro_11.wav | So just imagine just collecting tons of text from all kinds of different websites and collecting it together. So you take a large chunk of internet, then you procure a GPU cluster. | So just imagine just collecting tons of text from all kinds of different websites and collecting it together. So you take a large chunk of internet, then you procure a GPU cluster. |
llm_intro_12.wav | So these parameters that I showed you in an earlier slide are best kind of thought of as like a zip file of the internet. And in this case, what would come out are these parameters, 140 gigabytes. | So these parameters that I showed you in an earlier slide are best kind of thought of as like a zip file of the internet. And in this case, what would come out are these parameters, 140 gigabytes. |
llm_intro_13.wav | So you can see that the compression ratio here is roughly like 100x, roughly speaking. But this is not exactly a zip file because a zip file is lossless compression. | So you can see that the compression ratio here is roughly like 100x, roughly speaking. But this is not exactly a zip file because a zip file is lossless compression. |
llm_intro_14.wav | What's happening here is a lossy compression. We're just kind of like getting a kind of a gestalt of the text that we trained on. We don't have an identical copy of it in these parameters. | What's happening here is a lossy compression. We're just kind of like getting a kind of a gestalt of the text that we trained on. We don't have an identical copy of it in these parameters. |
llm_intro_15.wav | So if you want to think about state-of-the-art neural networks, like, say, what you might use in ChatGPT, or Claude, or Bard, or something like that, these numbers are off by a factor of 10 or more. | So if you want to think about state-of-the-art neural networks, like, say, what you might use in ChatGPT, or Claude, or Bard, or something like that, these numbers are off by a factor of 10 or more. |
llm_intro_16.wav | OK, so what is this neural network really doing? I mentioned that there are these parameters. This neural network basically is just trying to predict the next word in a sequence. | OK, so what is this neural network really doing? I mentioned that there are these parameters. This neural network basically is just trying to predict the next word in a sequence. |
llm_intro_17.wav | You can think about it that way. So you can feed in a sequence of words, for example, cat sat on a. This feeds into a neural net. And these parameters are dispersed throughout this neural network. | You can think about it that way. So you can feed in a sequence of words, for example, cat sat on a. This feeds into a neural net. And these parameters are dispersed throughout this neural network. |
llm_intro_18.wav | And there's neurons, and they're connected to each other, and they all fire in a certain way. You can think about it that way. And out comes a prediction for what word comes next. | And there's neurons, and they're connected to each other, and they all fire in a certain way. You can think about it that way. And out comes a prediction for what word comes next. |
llm_intro_19.wav | So for example, in this case, this neural network might predict that in this context of four words, the next word will probably be a mat with, say, 97% probability. | So for example, in this case, this neural network might predict that in this context of four words, the next word will probably be a mat with, say, 97% probability. |
llm_intro_20.wav | And you can show mathematically that there's a very close relationship between prediction and compression, which is why I allude to this neural network as training it as a compression of the internet. | And you can show mathematically that there's a very close relationship between prediction and compression, which is why I allude to this neural network as training it as a compression of the internet. |
llm_intro_21.wav | And so think about being the neural network, and you're given some amount of words and trying to predict the next word in a sequence. Well, in this case, I'm highlighting here in red | And so think about being the neural network, and you're given some amount of words and trying to predict the next word in a sequence. Well, in this case, I'm highlighting here in red |
llm_intro_22.wav | some of the words that would contain a lot of information. And so, for example, if your objective is to predict the next word, presumably your parameters have to learn a lot of this knowledge. | some of the words that would contain a lot of information. And so, for example, if your objective is to predict the next word, presumably your parameters have to learn a lot of this knowledge. |
llm_intro_23.wav | And so, in the task of next word prediction, you're learning a ton about the world, and all this knowledge is being compressed into the weights, the parameters. | And so, in the task of next word prediction, you're learning a ton about the world, and all this knowledge is being compressed into the weights, the parameters. |
llm_intro_24.wav | We basically generate what comes next, we sample from the model, so we pick a word, and then we continue feeding it back in and get the next word, and continue feeding that back in. | We basically generate what comes next, we sample from the model, so we pick a word, and then we continue feeding it back in and get the next word, and continue feeding that back in. |
llm_intro_25.wav | So for example, if we just run the neural network, or as we say, perform inference, we would get sort of like web page dreams. You can almost think about it that way, right? | So for example, if we just run the neural network, or as we say, perform inference, we would get sort of like web page dreams. You can almost think about it that way, right? |
llm_intro_26.wav | Because this network was trained on web pages, and then you can sort of like let it loose. So on the left, we have some kind of a Java code dream, it looks like. | Because this network was trained on web pages, and then you can sort of like let it loose. So on the left, we have some kind of a Java code dream, it looks like. |
llm_intro_27.wav | In the middle, we have some kind of what looks like almost like an Amazon product dream. And on the right, we have something that almost looks like a Wikipedia article. | In the middle, we have some kind of what looks like almost like an Amazon product dream. And on the right, we have something that almost looks like a Wikipedia article. |
llm_intro_28.wav | The model network just knows that what comes after ISBN colon is some kind of a number of roughly this length, and it's got all these digits, and it just like puts it in. | The model network just knows that what comes after ISBN colon is some kind of a number of roughly this length, and it's got all these digits, and it just like puts it in. |
llm_intro_29.wav | It just kind of like puts in whatever looks reasonable. So it's parroting the training dataset distribution. On the right, the blacknose dace, I looked it up, and it is actually a kind of fish. | It just kind of like puts in whatever looks reasonable. So it's parroting the training dataset distribution. On the right, the blacknose dace, I looked it up, and it is actually a kind of fish. |
llm_intro_30.wav | And what's happening here is this text verbatim is not found in the training set documents. But this information, if you actually look it up, is actually roughly correct with respect to this fish. | And what's happening here is this text verbatim is not found in the training set documents. But this information, if you actually look it up, is actually roughly correct with respect to this fish. |
llm_intro_31.wav | But again, it's some kind of a lossy compression of the internet. It kind of remembers the gestalt. It kind of knows the knowledge. And it just kind of like goes. | But again, it's some kind of a lossy compression of the internet. It kind of remembers the gestalt. It kind of knows the knowledge. And it just kind of like goes. |
llm_intro_32.wav | But for the most part, this is just kind of like hallucinating or like dreaming internet text from its data distribution. Okay, let's now switch gears to how does this network work? | But for the most part, this is just kind of like hallucinating or like dreaming internet text from its data distribution. Okay, let's now switch gears to how does this network work? |
llm_intro_33.wav | Now, what's remarkable about this neural net is we actually understand in full detail the architecture. We know exactly what mathematical operations happen at all the different stages of it. | Now, what's remarkable about this neural net is we actually understand in full detail the architecture. We know exactly what mathematical operations happen at all the different stages of it. |
llm_intro_34.wav | The problem is that these 100 billion parameters are dispersed throughout the entire neural network. So basically, these billions of parameters are throughout the neural net. | The problem is that these 100 billion parameters are dispersed throughout the entire neural network. So basically, these billions of parameters are throughout the neural net. |
llm_intro_35.wav | And all we know is how to adjust these parameters iteratively to make the network as a whole better at the next word prediction task. So we know how to optimize these parameters. | And all we know is how to adjust these parameters iteratively to make the network as a whole better at the next word prediction task. So we know how to optimize these parameters. |
llm_intro_36.wav | So we kind of understand that they build and maintain some kind of a knowledge database, but even this knowledge database is very strange and imperfect and weird. | So we kind of understand that they build and maintain some kind of a knowledge database, but even this knowledge database is very strange and imperfect and weird. |
llm_intro_37.wav | It will tell you it's Mary Lee Pfeiffer, which is correct. But if you say, who is Mary Lee Pfeiffer's son, it will tell you it doesn't know. So this knowledge is weird and it's kind of one-dimensional. | It will tell you it's Mary Lee Pfeiffer, which is correct. But if you say, who is Mary Lee Pfeiffer's son, it will tell you it doesn't know. So this knowledge is weird and it's kind of one-dimensional. |
llm_intro_38.wav | And you have to sort of like, this knowledge isn't just like stored and can be accessed in all the different ways. You have to sort of like ask it from a certain direction almost. | And you have to sort of like, this knowledge isn't just like stored and can be accessed in all the different ways. You have to sort of like ask it from a certain direction almost. |
llm_intro_39.wav | And so that's really weird and strange. And fundamentally, we don't really know because all you can kind of measure is whether it works or not and with what probability. | And so that's really weird and strange. And fundamentally, we don't really know because all you can kind of measure is whether it works or not and with what probability. |
llm_intro_40.wav | But right now we kind of treat them mostly as empirical artifacts. We can give them some inputs and we can measure the outputs. We can basically measure their behavior. | But right now we kind of treat them mostly as empirical artifacts. We can give them some inputs and we can measure the outputs. We can basically measure their behavior. |
llm_intro_41.wav | And so I think this requires basically correspondingly sophisticated evaluations to work with these models because they're mostly empirical. So now let's go to how we actually obtain an assistant. | And so I think this requires basically correspondingly sophisticated evaluations to work with these models because they're mostly empirical. So now let's go to how we actually obtain an assistant. |
llm_intro_42.wav | And this is where we obtain what we call an assistant model, because we don't actually really just want document generators. That's not very helpful for many tasks. | And this is where we obtain what we call an assistant model, because we don't actually really just want document generators. That's not very helpful for many tasks. |
llm_intro_43.wav | And the way you obtain these assistant models is fundamentally through the following process. We basically keep the optimization identical, so the training will be the same. | And the way you obtain these assistant models is fundamentally through the following process. We basically keep the optimization identical, so the training will be the same. |
llm_intro_44.wav | It's just a next word prediction task. But we're going to swap out the data set on which we are training. So it used to be that we are trying to train on internet documents. | It's just a next word prediction task. But we're going to swap out the data set on which we are training. So it used to be that we are trying to train on internet documents. |
llm_intro_45.wav | We're going to now swap it out for data sets that we collect manually. And the way we collect them is by using lots of people. So typically, a company will hire people. | We're going to now swap it out for data sets that we collect manually. And the way we collect them is by using lots of people. So typically, a company will hire people. |
llm_intro_46.wav | So we swap out the dataset now, and we train on these Q&A documents. And this process is called fine-tuning. Once you do this, you obtain what we call an assistant model. | So we swap out the dataset now, and we train on these Q&A documents. And this process is called fine-tuning. Once you do this, you obtain what we call an assistant model. |
llm_intro_47.wav | So this assistant model now subscribes to the form of its new training documents. So for example, if you give it a question like, can you help me with this code? | So this assistant model now subscribes to the form of its new training documents. So for example, if you give it a question like, can you help me with this code? |
llm_intro_48.wav | In the pre-training stage, you get a ton of text from the internet. You need a cluster of GPUs. So these are special purpose computers for these kinds of parallel processing workloads. | In the pre-training stage, you get a ton of text from the internet. You need a cluster of GPUs. So these are special purpose computers for these kinds of parallel processing workloads. |
llm_intro_49.wav | This is not just things that you can buy at Best Buy. These are very expensive computers. And then you compress the text into this neural network, into the parameters of it. | This is not just things that you can buy at Best Buy. These are very expensive computers. And then you compress the text into this neural network, into the parameters of it. |
llm_intro_50.wav | This would only potentially take like one day or something like that instead of a few months or something like that. And you obtain what we call an assistant model. | This would only potentially take like one day or something like that instead of a few months or something like that. And you obtain what we call an assistant model. |
llm_intro_51.wav | Then you run a lot of evaluations, you deploy this, and you monitor, collect misbehaviors. And for every misbehavior, you want to fix it. And you go to step one and repeat. | Then you run a lot of evaluations, you deploy this, and you monitor, collect misbehaviors. And for every misbehavior, you want to fix it. And you go to step one and repeat. |
llm_intro_52.wav | So you take that, and you ask a person to fill in the correct response. And so the person overwrites the response with the correct one, and this is then inserted as an example into your training data. | So you take that, and you ask a person to fill in the correct response. And so the person overwrites the response with the correct one, and this is then inserted as an example into your training data. |
llm_intro_53.wav | Because fine-tuning is a lot cheaper, you can do this every week, every day, or so on, and companies often will iterate a lot faster on the fine-tuning stage instead of the pre-training stage. | Because fine-tuning is a lot cheaper, you can do this every week, every day, or so on, and companies often will iterate a lot faster on the fine-tuning stage instead of the pre-training stage. |
llm_intro_54.wav | One other thing to point out is, for example, I mentioned the Lama 2 series. The Lama 2 series actually, when it was released by Meta, contains both the base models and the assistant models. | One other thing to point out is, for example, I mentioned the Lama 2 series. The Lama 2 series actually, when it was released by Meta, contains both the base models and the assistant models. |
llm_intro_55.wav | If you give it questions, it will just give you more questions, or it will do something like that, because it's just an internet document sampler. So these are not super helpful. | If you give it questions, it will just give you more questions, or it will do something like that, because it's just an internet document sampler. So these are not super helpful. |
llm_intro_56.wav | Now see how in stage two I'm saying and/or comparisons? I would like to briefly double click on that because there's also a stage three of fine-tuning that you can optionally go to or continue to. | Now see how in stage two I'm saying and/or comparisons? I would like to briefly double click on that because there's also a stage three of fine-tuning that you can optionally go to or continue to. |
llm_intro_57.wav | The reason that we do this is that in many cases it is much easier to compare candidate answers than to write an answer yourself if you're a human labeler. So consider the following concrete example. | The reason that we do this is that in many cases it is much easier to compare candidate answers than to write an answer yourself if you're a human labeler. So consider the following concrete example. |
llm_intro_58.wav | Suppose that the question is to write a haiku about paperclips or something like that. From the perspective of a labeler, if I'm asked to write a haiku, that might be a very difficult task, right? | Suppose that the question is to write a haiku about paperclips or something like that. From the perspective of a labeler, if I'm asked to write a haiku, that might be a very difficult task, right? |
llm_intro_59.wav | Well, then as a labeler, you could look at these haikus and actually pick the one that is much better. And so in many cases, it is easier to do the comparison instead of the generation. | Well, then as a labeler, you could look at these haikus and actually pick the one that is much better. And so in many cases, it is easier to do the comparison instead of the generation. |
llm_intro_60.wav | And there's a stage three of fine-tuning that can use these comparisons to further fine-tune the model. And I'm not going to go into the full mathematical detail of this. | And there's a stage three of fine-tuning that can use these comparisons to further fine-tune the model. And I'm not going to go into the full mathematical detail of this. |
llm_intro_61.wav | I also wanted to show you very briefly one slide showing some of the labeling instructions that we give to humans. So this is an excerpt from the paper InstructGPT by OpenAI. | I also wanted to show you very briefly one slide showing some of the labeling instructions that we give to humans. So this is an excerpt from the paper InstructGPT by OpenAI. |
llm_intro_62.wav | One more thing that I wanted to mention is that I've described the process naively as humans doing all of this manual work, but that's not exactly right, and it's increasingly less correct. | One more thing that I wanted to mention is that I've described the process naively as humans doing all of this manual work, but that's not exactly right, and it's increasingly less correct. |
llm_intro_63.wav | And so for example, you can get these language models to sample answers, and then people sort of like cherry pick parts of answers to create one sort of single best answer. | And so for example, you can get these language models to sample answers, and then people sort of like cherry pick parts of answers to create one sort of single best answer. |
llm_intro_64.wav | Or you can ask these models to try to check your work, or you can try to ask them to create comparisons, and then you're just kind of like in an oversight role over it. | Or you can ask these models to try to check your work, or you can try to ask them to create comparisons, and then you're just kind of like in an oversight role over it. |
llm_intro_65.wav | Okay, finally, I wanted to show you a leaderboard of the current leading large language models out there. So this, for example, is the Chatbot Arena. It is managed by a team at Berkeley. | Okay, finally, I wanted to show you a leaderboard of the current leading large language models out there. So this, for example, is the Chatbot Arena. It is managed by a team at Berkeley. |
llm_intro_66.wav | And what they do here is they rank the different language models by their Elo rating. And the way you calculate Elo is very similar to how you would calculate it in chess. | And what they do here is they rank the different language models by their Elo rating. And the way you calculate Elo is very similar to how you would calculate it in chess. |
llm_intro_67.wav | So different chess players play each other, and depending on the win rates against each other, you can calculate their Elo scores. You can do the exact same thing with language models. | So different chess players play each other, and depending on the win rates against each other, you can calculate their Elo scores. You can do the exact same thing with language models. |
llm_intro_68.wav | So you can go to this website, you enter some question, you get responses from two models, and you don't know what models they were generated from, and you pick the winner. | So you can go to this website, you enter some question, you get responses from two models, and you don't know what models they were generated from, and you pick the winner. |
llm_intro_69.wav | And then depending on who wins and who loses, you can calculate the Elo scores. So the higher, the better. So what you see here is that crowding up on the top, you have the proprietary models. | And then depending on who wins and who loses, you can calculate the Elo scores. So the higher, the better. So what you see here is that crowding up on the top, you have the proprietary models. |
llm_intro_70.wav | These are closed models. You don't have access to the weights. They are usually behind a web interface. And this is the GPT series from OpenAI and the Claude series from Anthropic. | These are closed models. You don't have access to the weights. They are usually behind a web interface. And this is the GPT series from OpenAI and the Claude series from Anthropic. |
llm_intro_71.wav | But roughly speaking, what you're seeing today in the ecosystem is that the closed models work a lot better, but you can't really work with them, fine tune them, download them, et cetera. | But roughly speaking, what you're seeing today in the ecosystem is that the closed models work a lot better, but you can't really work with them, fine tune them, download them, et cetera. |
llm_intro_72.wav | And all of this stuff works worse, but depending on your application, that might be good enough. And so currently I would say the open source ecosystem is trying to boost performance | And all of this stuff works worse, but depending on your application, that might be good enough. And so currently I would say the open source ecosystem is trying to boost performance |
llm_intro_73.wav | Okay, so now I'm going to switch gears and we're going to talk about the language models, how they're improving, and where all of it is going in terms of those improvements. | Okay, so now I'm going to switch gears and we're going to talk about the language models, how they're improving, and where all of it is going in terms of those improvements. |
llm_intro_74.wav | So if you train a bigger model on more text, we have a lot of confidence that the next word prediction task will improve. So algorithmic progress is not necessary. | So if you train a bigger model on more text, we have a lot of confidence that the next word prediction task will improve. So algorithmic progress is not necessary. |
llm_intro_75.wav | And algorithmic progress is kind of like a nice bonus and a lot of these organizations invest a lot into it. But fundamentally the scaling kind of offers one guaranteed path to success. | And algorithmic progress is kind of like a nice bonus and a lot of these organizations invest a lot into it. But fundamentally the scaling kind of offers one guaranteed path to success. |
llm_intro_76.wav | And instead of speaking in abstract terms, I'd like to work with a concrete example that we can sort of step through. So I went to ChatGPT and I gave the following query. | And instead of speaking in abstract terms, I'd like to work with a concrete example that we can sort of step through. So I went to ChatGPT and I gave the following query. |
llm_intro_77.wav | So in this case, a very reasonable tool to use would be, for example, the browser. So if you and I were faced with the same problem, you would probably go off and you would do a search, right? | So in this case, a very reasonable tool to use would be, for example, the browser. So if you and I were faced with the same problem, you would probably go off and you would do a search, right? |
llm_intro_78.wav | And that's exactly what ChatGPT does. So it has a way of emitting special words that we can sort of look at and we can basically look at it trying to perform a search. | And that's exactly what ChatGPT does. So it has a way of emitting special words that we can sort of look at and we can basically look at it trying to perform a search. |
llm_intro_79.wav | It works very similar to how you and I would do research using browsing. And it organizes this into the following information. And it sort of responds in this way. | It works very similar to how you and I would do research using browsing. And it organizes this into the following information. And it sort of responds in this way. |
llm_intro_80.wav | So it's collected the information. We have a table. We have series A, B, C, D, and E. We have the date, the amount raised, and the implied valuation in the series. | So it's collected the information. We have a table. We have series A, B, C, D, and E. We have the date, the amount raised, and the implied valuation in the series. |
llm_intro_81.wav | On the bottom, it said that, actually, I apologize, I was not able to find the series A and B valuations. It only found the amounts raised. So you see how there's a not available in the table. | On the bottom, it said that, actually, I apologize, I was not able to find the series A and B valuations. It only found the amounts raised. So you see how there's a not available in the table. |
llm_intro_82.wav | So, okay, we can now continue this kind of interaction. So I said, okay, let's try to guess or impute the valuation for series A and B based on the ratios we see in series C, D, and E. | So, okay, we can now continue this kind of interaction. So I said, okay, let's try to guess or impute the valuation for series A and B based on the ratios we see in series C, D, and E. |
llm_intro_83.wav | That would be very complicated because you and I are not very good at math. In the same way, ChatGPT, just in its head sort of, is not very good at math either. | That would be very complicated because you and I are not very good at math. In the same way, ChatGPT, just in its head sort of, is not very good at math either. |
llm_intro_84.wav | I'm saying the x-axis is the date and the y-axis is the valuation of Scale AI. Use logarithmic scale for y-axis, make it very nice, professional, and use gridlines. | I'm saying the x-axis is the date and the y-axis is the valuation of Scale AI. Use logarithmic scale for y-axis, make it very nice, professional, and use gridlines. |
llm_intro_85.wav | And so now we're looking at this and we'd like to do more tasks. So for example, let's now add a linear trend line to this plot, and we'd like to extrapolate the valuation to the end of 2025. | And so now we're looking at this and we'd like to do more tasks. So for example, let's now add a linear trend line to this plot, and we'd like to extrapolate the valuation to the end of 2025. |
llm_intro_86.wav | And ChatGPT goes off, writes all of the code, not shown, and sort of gives the analysis. So on the bottom, we have the date, we've extrapolated, and this is the valuation. | And ChatGPT goes off, writes all of the code, not shown, and sort of gives the analysis. So on the bottom, we have the date, we've extrapolated, and this is the valuation. |
llm_intro_87.wav | So based on this fit, today's valuation is $150 billion, apparently, roughly. And at the end of 2025, Scale AI is expected to be a $2 trillion company. So congratulations to the team. | So based on this fit, today's valuation is $150 billion, apparently, roughly. And at the end of 2025, Scale AI is expected to be a $2 trillion company. So congratulations to the team. |
llm_intro_88.wav | In this case, this tool is DALL-E, which is also a tool developed by OpenAI. It takes natural language descriptions and it generates images. Here, DALL-E was used as a tool to generate this image. | In this case, this tool is DALL-E, which is also a tool developed by OpenAI. It takes natural language descriptions and it generates images. Here, DALL-E was used as a tool to generate this image. |
llm_intro_89.wav | We use tons of tools, we find computers very useful, and the exact same is true for large language models, and this is increasingly a direction that is utilized by these models. | We use tons of tools, we find computers very useful, and the exact same is true for large language models, and this is increasingly a direction that is utilized by these models. |
llm_intro_90.wav | Okay, so I've shown you here that ChatGPT can generate images. Now, multimodality is actually like a major axis along which large language models are getting better. | Okay, so I've shown you here that ChatGPT can generate images. Now, multimodality is actually like a major axis along which large language models are getting better. |
llm_intro_91.wav | So in this famous demo from Greg Brockman, one of the founders of OpenAI, he showed ChatGPT a picture of a little MyJoke website diagram that he just, you know, sketched out with a pencil. | So in this famous demo from Greg Brockman, one of the founders of OpenAI, he showed ChatGPT a picture of a little MyJoke website diagram that he just, you know, sketched out with a pencil. |
llm_intro_92.wav | You can go to this MyJoke website, and you can see a little joke, and you can click to reveal a punchline. And this just works. So it's quite remarkable that this works. | You can go to this MyJoke website, and you can see a little joke, and you can click to reveal a punchline. And this just works. So it's quite remarkable that this works. |
llm_intro_93.wav | And fundamentally, you can basically start plugging images into the language models alongside with text. And ChatGPT is able to access that information and utilize it. | And fundamentally, you can basically start plugging images into the language models alongside with text. And ChatGPT is able to access that information and utilize it. |
llm_intro_94.wav | Now, I mentioned that the major axis here is multimodality, so it's not just about images, seeing them and generating them, but also, for example, about audio. So, ChatGPT can now both hear and speak. | Now, I mentioned that the major axis here is multimodality, so it's not just about images, seeing them and generating them, but also, for example, about audio. So, ChatGPT can now both hear and speak. |
llm_intro_95.wav | Okay, so now I would like to switch gears to talking about some of the future directions of development in larger language models that the field broadly is interested in. | Okay, so now I would like to switch gears to talking about some of the future directions of development in larger language models that the field broadly is interested in. |
llm_intro_96.wav | It's just some of the things that people are thinking about. The first thing is this idea of system 1 versus system 2 type of thinking that was popularized by this book, Thinking Fast and Slow. | It's just some of the things that people are thinking about. The first thing is this idea of system 1 versus system 2 type of thinking that was popularized by this book, Thinking Fast and Slow. |
llm_intro_97.wav | So what is the distinction? The idea is that your brain can function in two kind of different modes. The system 1 thinking is your quick, instinctive, and automatic sort of part of the brain. | So what is the distinction? The idea is that your brain can function in two kind of different modes. The system 1 thinking is your quick, instinctive, and automatic sort of part of the brain. |
llm_intro_98.wav | So for example, if I ask you, what is 2 plus 2? You're not actually doing that math. You're just telling me it's 4, because it's available. It's cached. It's instinctive. | So for example, if I ask you, what is 2 plus 2? You're not actually doing that math. You're just telling me it's 4, because it's available. It's cached. It's instinctive. |
llm_intro_99.wav | You have to work out the problem in your head and give the answer. Another example is if some of you potentially play chess, when you're doing speed chess, you don't have time to think. | You have to work out the problem in your head and give the answer. Another example is if some of you potentially play chess, when you're doing speed chess, you don't have time to think. |
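The columns above (`audio_id`, `audio`, `raw_text`, `normalized_text`) map directly onto the Hugging Face `datasets` API. The snippet below is a minimal sketch only: the repository id `your-username/llm-intro-audio`, the `train` split name, and the 16 kHz resampling rate are assumptions that do not appear in this preview and should be replaced with the actual values.

```python
# Minimal sketch for loading and inspecting this dataset with the `datasets` library.
# NOTE: the repo id, split name, and sampling rate below are placeholders/assumptions.
from datasets import load_dataset, Audio

ds = load_dataset("your-username/llm-intro-audio", split="train")  # hypothetical repo id

# Decode the audio column to raw waveforms, resampled to 16 kHz (assumed rate).
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

row = ds[0]
print(row["audio_id"])          # e.g. "llm_intro_0.wav"
print(row["raw_text"])          # transcript as spoken
print(row["normalized_text"])   # normalized transcript

audio = row["audio"]            # dict with "array", "sampling_rate", "path"
print(len(audio["array"]) / audio["sampling_rate"], "seconds")
```

Durations computed this way should fall within the 6.76-14.4 s range reported in the column summary above.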