So, I'm now super thrilled to introduce to you a legendary person, someone who has been hacking and educating at the forefront of AI for over a decade. From neural networks to computer vision, from natural language processing to reinforcement learning, he has pushed the boundaries and inspired millions all over the world, including, I think, all of us here. He is a distinguished machine learning superstar, a founding member of OpenAI, the reference human for ImageNet, ex-Google Brain, ex-DeepMind, ex-Tesla, Mr. Autopilot; he has really seen it all. And some months ago, on a memorable day, this special person joined the CUDA MODE Discord to start hacking with others on llm.c, which became one of the greatest and most active community projects on our server. But I guess it's best if he tells the story himself. So please join me in welcoming the incredible, one and only, Andrej Karpathy!

Wow. Okay. Very impressive. Okay, yeah, I'm very excited to be here. This is my favorite kind of event to present at. So yeah, thank you for the invitation, and thank you for running CUDA MODE and putting this on. This is a wonderful event.

Okay. So I'll tell you a bit about llm.c. What are we doing? We're training transformers in C, and a pinch of C++. I'd like to tell the story a little bit of how this project came about and what it looked like from my perspective. So roughly a year ago, I was trying to add a video to my YouTube series, trying to teach people LLM training, GPT training and so on, and I was basically hacking on nanoGPT, trying to get it to work. So that was me. And you've all worked with PyTorch, of course, right? The trickiness comes in because, okay, you have your model, which you've written, and that makes sense. But now you have to keep track of a number of abstractions at the same time. You have to put it on a device. You want to compile it. You want to wrap it in DDP. And suddenly things start to be a little bit more complicated, because I'm not even sure in what order you do these. What exactly happens? What are these abstractions? What do they do to your model? I don't fully understand how any of this works.

And then what happens is you want to use your model in different ways: in evaluation, in training, in inference, and so on. And what happened to me is that I was able to train the model, but for some reason eval and inference were not working. I was getting some kind of a torch.compile error when I was trying to run my eval and my inference. And this is just an illustrative example of a torch.compile error; it was something else, I don't remember, I didn't capture it. But both of them were giving me errors, inference and eval, each a different error, and I had no idea what was going on. So I did what anyone would do in my position: I went to PyTorch Discuss, looking for ptrblck to solve my issue. Unfortunately, ptrblck did not have any guidance that I could see on that specific error. So I was kind of stuck, honestly. Two hours later of fighting with torch.compile and trying to figure out what the hell was going on, I'm kind of a sad panda. I don't know exactly how to solve this. And so I felt like I was going through the stages of grief. In the beginning, I was in denial. I was like, this can't be happening to me. I'm not doing anything crazy. I'm just training a little GPT. Why is this not working? This seems really simple. I'm not doing anything crazy.
And then eventually I entered the stage of anger, and I was like, okay, you know what? I'm just going to write the whole thing. I understand in my mind what I'm trying to do. The computation itself, the algorithm itself, is totally clear in my mind, and for some reason torch.compile doesn't let me use it, run it, et cetera. So I felt a little bit powerless, and I was like, okay, I'm going to take life into my own hands and be in control of my destiny. I'm going to just write this in C; how bad could it be?

So let's think about what PyTorch is offering you, really. There are many things, but maybe some of the things that are relevant here. (I don't know why those bullet points are stacked on top of one another; on my slides it's totally fine, so I don't know what conversion happened here.) Number one, we're getting an array, right? A very useful n-dimensional array that we can manipulate with operations. If we're going to abandon this, we're going to have to do a lot of pointer arithmetic, basically making sure that we ravel and unravel indices correctly. Second, we're getting autograd for free. If we don't have autograd, we need to write the forward and backward passes of all the layers ourselves. We don't have the device abstraction, so we have to worry about memory being on the host or on the device, and shoveling memory around between CPU and GPU and so on. We don't have simple dtype conversions, so we have to be very mindful of what tensors are stored in what precisions and convert explicitly between them. We don't have torch.compile, so we're going to have to do all the kernel fusions that we want manually, and we're going to have to optimize for space and time performance manually. And finally, we don't have distributed, so we're going to have to manually spin up all of our processes, make sure that they can find each other, communicate with NCCL, et cetera. So PyTorch is really, really nice, and this is just some of what it offers. Without PyTorch, we're kind of naked in the world, right? But maybe it's okay.

So yeah, how bad could it be? Step one, we have our PyTorch code, which now isn't the primary thing we're working with; it's only a reference that we check correctness against. And so we're in PyTorch land. Everything is nice and clean. We have a little transformer, a few modules, and we're just calling them, so everything is great. That now becomes our reference in PyTorch. I'd like to just take you through one example of a layer. So for example, layernorm here is a PyTorch layer, and we'd like to basically port it over to C. What kind of process do we go through? Well, we're going to iterate through all the layers. Number one, we need the forward pass, and I actually had to write a forward pass of layernorm, because PyTorch doesn't just have a plain PyTorch implementation of layernorm lying around; it's a block that eventually calls into some CUDA kernels. So I had to write the forward pass of layernorm and make sure it's equivalent to the layernorm in PyTorch. And then, of course, I had to write the backward pass of layernorm. This is where you take out your pen and paper and do some backprop. This is for batchnorm, but layernorm would be similar. And yeah, we have to write the backward pass.
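For reference, the layernorm math that gets back-propagated here is compact enough to write out. This is the standard derivation, written out from scratch rather than taken from the talk, with C the number of channels, gamma and beta the weight and bias, and epsilon a small constant:

$$
\mu = \frac{1}{C}\sum_{i=1}^{C} x_i, \qquad
\sigma^2 = \frac{1}{C}\sum_{i=1}^{C} (x_i - \mu)^2, \qquad
\hat{x}_i = \frac{x_i - \mu}{\sqrt{\sigma^2 + \epsilon}}, \qquad
y_i = \gamma_i \hat{x}_i + \beta_i
$$

and for the backward pass, writing $g_i = \frac{\partial L}{\partial y_i}\,\gamma_i$:

$$
\frac{\partial L}{\partial x_i} = \frac{1}{\sqrt{\sigma^2 + \epsilon}}\left( g_i - \frac{1}{C}\sum_{j} g_j - \hat{x}_i \cdot \frac{1}{C}\sum_{j} g_j \hat{x}_j \right), \qquad
\frac{\partial L}{\partial \gamma_i} = \sum_{b,t} \frac{\partial L}{\partial y_i}\,\hat{x}_i, \qquad
\frac{\partial L}{\partial \beta_i} = \sum_{b,t} \frac{\partial L}{\partial y_i}
$$

This is exactly the kind of expression that then gets typed out over explicit tensors in the PyTorch reference, and later over plain float arrays in C.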
And again, this is all still in PyTorch, but it's explicit, and you're just making sure that layernorm in PyTorch, forward and backward, matches this basically manual, tensor-based implementation. So now we have PyTorch code, forward and backward.

The next thing we do is try to port it to C. And this is actually a lot simpler in many cases than you might think. So on the left, we have the PyTorch code, and on the right, we basically have the equivalent layernorm forward in C. And it's not that crazy, right? Unlike in PyTorch, we just have a bunch of float* arrays: a float* out, float* inputs, outputs, means, standard deviations, weights and biases, and some hyperparameters. And one thing I really like to do in llm.c is to just keep things simple. I don't want to create a tensor abstraction. I don't want to create any abstraction, really. It's just float arrays and operations on float arrays. Why should it be a lot more complicated than that? So everything is just float arrays, everything is fully self-contained, and there are no underlying representations or abstractions to call, import, et cetera. This is the layernorm forward on float arrays, and that's it. So that's the forward, and then you also do the backward for all the layers.

Once we've done that for all the layers, converted everything to C, and made sure that everything matches our reference implementation, we have to start to string it together. So we go into our C code, into main, and we have to allocate all of the memory that we're going to be using. In llm.c, all of the allocation happens a single time, at the beginning. We pre-plan all of the memory that we're ever going to use; then it's fixed, and from then on it's just the dynamics of feeding data through it and training the model. So we have to pre-plan all of the tensors and their sizes. And we have to do that for the parameters, where we have the data and grad, plus the m and v buffers for AdamW, and then for the activations as well, where we need space for both data and grad. So you pre-plan all of the memory, you allocate all of it, and then we need to stitch it all up. We have all of these layers, and each has a forward and a backward pass for backpropagation. On the forward pass, you're very careful, you index into these tensors properly, and you make sure everything flows correctly through. You just call all the forwards and then all the backwards, and then you're done; you're left with gradients, and you can do an update. So stringing that together is the second piece of work.

And then once we've strung it together, you get something that you can just compile and run. On the top left is everything that's required. We download a starter pack, which is really just the GPT-2 weights in a single binary file, very simple, and also the dataset, in this case Tiny Shakespeare, and the tokenizer and stuff like that. And then we just compile and run this little C file. It's a single file of C at this point, and I think it's like 2,000 lines or something like that, if I remember correctly. And you run that program, it does a little training and outputs some Shakespeare at the end, and then we can verify that the PyTorch code is identical to the C code, and everything is great. We're just running in C. And at this point, I'm actually feeling quite great, because this is amazing. So we have a single file of C.
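To give a flavor of what those float-array layer functions look like, here is a minimal layernorm forward in that style. It is a simplified sketch of the idea rather than the exact llm.c function; the variable names and the caching of mean and rstd for the backward pass are how I would write it, not a quote from the repo.

```c
#include <math.h>

// layernorm forward over activations of shape (B, T, C), everything as plain float*.
// inp: (B,T,C), weight/bias: (C,), out: (B,T,C), mean/rstd: (B,T) caches for the backward pass
void layernorm_forward(float* out, float* mean, float* rstd,
                       const float* inp, const float* weight, const float* bias,
                       int B, int T, int C) {
    const float eps = 1e-5f;
    for (int b = 0; b < B; b++) {
        for (int t = 0; t < T; t++) {
            const float* x = inp + (b * T + t) * C;  // pointer arithmetic instead of tensor indexing
            // mean over the C channels
            float m = 0.0f;
            for (int i = 0; i < C; i++) { m += x[i]; }
            m /= C;
            // variance over the C channels
            float v = 0.0f;
            for (int i = 0; i < C; i++) { float d = x[i] - m; v += d * d; }
            v /= C;
            float s = 1.0f / sqrtf(v + eps);  // reciprocal standard deviation
            // normalize, scale, shift
            float* o = out + (b * T + t) * C;
            for (int i = 0; i < C; i++) {
                o[i] = (x[i] - m) * s * weight[i] + bias[i];
            }
            // cache statistics for the backward pass
            mean[b * T + t] = m;
            rstd[b * T + t] = s;
        }
    }
}
```

The backward pass is written in exactly the same style, just with a few more loops and the cached mean and rstd.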
There are no dependencies whatsoever. It compiles instantly. It runs instantly. All of the memory is allocated in a single blob up front, so if you start stepping, there's no way you're going to OOM later. It's all pre-planned. It's fully deterministic. And in principle it can train GPT-2. It's complete; it will train GPT-2, you just have to wait a long time. And it can run on a potato. It can just run on anything. It's just a single file of C with no dependencies. In principle, this would be a great candidate to run on a von Neumann probe in space, if we just harden it a little bit more, because you're not going to ship PyTorch code on a von Neumann probe. But I think llm.c is a great candidate for that. So I was feeling great at this point.

A fun side note, by the way: all of the work I've described so far happened on a vacation while I was jet-lagged in the Maldives. So basically, it's perfect, because you wake up at 1 a.m. and there's nothing to do, so you write stuff like llm.c, and then at sunrise you go do all the water activities. That is the villa where most of llm.c was trained. This is a picture of it; I think the moon is about to set and the sunrise is about to happen. This is a recommended way to do software development.

Okay. So now we have C code, but it's inefficient, so we'd like to run it faster. For that, we reach for GPUs, so we need to convert all of our C code to GPU code. This is where we go to the dev/cuda part of the repo and start to develop all the kernels. Here's the layernorm forward pass, as I mentioned, and now we're going to develop a number of kernels that have identical functionality but run on the GPU, and they're going to be faster. Usually we have versions one, two, three, four, five, six, et cetera. These are all different kernel implementations; they usually get a bit faster over time, but they match the specification exactly and give the exact same numbers. So we develop all those layers and port them to CUDA. And this, I don't know what this is, I'm going to skip that; it's one of the kernels. Basically, the point here is that the first kernel is usually trivial to do, because you're parallelizing over batch and time, and then you're basically copy-pasting the C code into your CUDA kernel. And you're already getting speedups, because you're parallelizing over the batch-time tokens and each thread just handles a single output element (see the sketch below). So the first kernel is usually trivial, but then the optimizations can get pretty elaborate. By the end, we get to kernel six, for example, in layernorm, and we're doing a lot of things that are a bit more complicated: we have some warp reduce operations, we communicate through shared memory and through global memory and have to orchestrate it correctly, cache streaming hints, and a bunch of little tips and tricks for dealing with everything. I'm going to go into a bit more detail later, but you can get arbitrarily complicated here writing the CUDA code.

One thing that I found in this project is that it's not exactly trivial to learn CUDA, unfortunately. It was a little bit harder than I expected. I knew some CUDA going in, but getting better at it, I think, is not trivial. Some of these books, unfortunately, are a bit out of date, as you might know. PMPP is actually quite good.
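Here is the kind of "version 1" kernel referred to a moment ago: the outer batch/time loop of the C code becomes the thread index, and each thread normalizes one (b, t) position. This is an illustrative sketch written for this writeup, not the exact kernel from dev/cuda, and the names and launch configuration are my own.

```cuda
// version 1: one thread per (b, t) row; a near-verbatim port of the C loop body
__global__ void layernorm_forward_kernel1(float* out, float* mean, float* rstd,
                                          const float* inp, const float* weight,
                                          const float* bias, int N, int C) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;  // which of the N = B*T rows this thread owns
    if (idx >= N) return;
    const float eps = 1e-5f;
    const float* x = inp + idx * C;
    // mean and variance over the C channels, exactly as in the C code
    float m = 0.0f;
    for (int i = 0; i < C; i++) { m += x[i]; }
    m /= C;
    float v = 0.0f;
    for (int i = 0; i < C; i++) { float d = x[i] - m; v += d * d; }
    v /= C;
    float s = rsqrtf(v + eps);
    // normalize, scale, shift
    float* o = out + idx * C;
    for (int i = 0; i < C; i++) {
        o[i] = (x[i] - m) * s * weight[i] + bias[i];
    }
    mean[idx] = m;
    rstd[idx] = s;
}

// example launch: one thread per row
// int N = B * T, block = 256;
// layernorm_forward_kernel1<<<(N + block - 1) / block, block>>>(out, mean, rstd, inp, weight, bias, N, C);
```

The later kernel versions assign a warp or a block per row and use warp reductions for the mean and variance, which is where the complexity starts to grow.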
But even PMPP is still mostly at the beginner level, because a lot of the CUDA code that we ended up developing over the lifetime of the llm.c project you would not find in that book; a lot of the kernels we ended up adding just aren't covered. And then on top of that, you have the CUDA C++ Programming Guide, which frankly is not exactly readable for someone who is a bit new to CUDA. And then you have this amazing blog post from Simon, who's at Anthropic, which is way better than anything we deserve, just sitting there randomly on the internet. That was incredible, and if there were just more of that, it would be so amazing. So I found it a little bit difficult, but I'm hoping that things like CUDA MODE can speed up how people learn to write CUDA.

Okay, so next, what happened is I was basically struggling with the CUDA code a little bit. I was reading through the book and implementing all these CUDA kernels, and they were okay CUDA kernels, but they were not great. And so a team of Avengers assembled from the internet when they saw the project and started contributing. Specifically Eric, Arun, and Aleksa are, I would say, core devs of llm.c and have contributed a ton of work. They started to really optimize and write all these kernels, and this was incredible to watch and learn a lot from. And there are many more: Ross Wheeler and chinthysl and a few others. Over time we've had 60 contributors to the llm.c project. Shout-out to Lambda for sponsoring llm.c; they contribute compute so that we can run and optimize all these kernels. So it was amazing to me that people just came from the internet and helped out on the project. This is one of my favorite things that can happen with an open-source, MIT-licensed repo: people just come from the internet and help contribute. It's amazing.

Okay, so we've converted all the layers to CUDA. We now have all the kernels, and we can train on a single GPU, in FP32 so far. So that's great. From then on we start to make more and more optimizations. Number one, we don't want to keep the hand-rolled FP32 matmuls, so we switched to cuBLAS. Step two, we don't want to write our own flash attention; I think that would be pretty complicated. It turns out cuDNN has a very good flash attention implementation, so we switched to that. Next, you definitely want to reach for mixed precision to speed up the code. So you go over all your tensors, for parameters and also for activations and so on, and you start to think about which ones are in float32, which ones are in bfloat16, what precision they're in, and then do all the conversions automatically. So we reached for that and implemented it. There are many, many other optimizations that we ended up implementing over time. As an example, we did all the kernel fusions, and different recompute settings to recompute a piece of the forward pass during the backward pass. There have been a lot of optimizations from Eric, especially on minimizing the amount of memory that you need during the backward pass. We have this Packed128 data structure, which basically, in our experience, forces the compiler to use the 128-bit load and store instructions that are available but that the compiler is somehow unwilling to use in many cases.
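A minimal sketch of the packed-128 idea, as I understand it (the actual llm.c Packed128 differs in its details, for example it is templated so it also works for bf16): a 16-byte-aligned struct, so that one load or store of it can compile down to a single 128-bit instruction.

```cuda
// a 16-byte-aligned bundle of 4 floats; reading or writing one of these can compile
// to a single 128-bit LDG/STG instruction instead of four separate 32-bit ones
struct alignas(16) f128 {
    float v[4];
};

__device__ inline f128 load128(const float* address) {
    // address must be 16-byte aligned for this to be valid
    return *reinterpret_cast<const f128*>(address);
}

__device__ inline void store128(float* target, const f128& value) {
    *reinterpret_cast<f128*>(target) = value;
}
```

In a kernel inner loop you then step through the arrays four floats at a time; the point is simply to leave the compiler little choice but to emit the vectorized loads and stores.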
So I think Arun did a lot of work here, where you just look at the SASS (the SASS is the assembly), you look at what instructions are being used for your loop, and you figure out that, okay, there should be a 128-bit load and store here, but it happens to be 32-bit or something else, because something in the NVCC compiler is not going very well. So we found that this data structure forces the compiler's hand a bit more. We implemented all kinds of CUDA streams to overlap parts of the computation, and this ended up creating a total disaster. That's why I scratched it out, because at one point in llm.c, as Arun would say, I basically went in and nuked it from orbit. I just went in, control-F'd for all mentions of stream, and delete, delete, delete. Basically I deleted all the streams and made everything run in a single stream, because we ended up getting all kinds of really weird race conditions and errors and so on, and I just didn't want to deal with it. So llm.c is not actually as overlapped as it could be, but it's just too much complexity for not enough gain at this point. Maybe we can slowly reintroduce some of it. We have stochastic rounding, we have full determinism. Full determinism turns out to be pretty hard, because some of the kernels complexify a lot when you can't use atomics. The encoder backward was especially crazy, because the encoder backward is trivial with atomics but non-trivial without them. Anyway, a lot of the optimizations were done with efficiency, determinism, and accuracy in mind, things like stochastic rounding and so on.

Next, you want to use multiple GPUs, not just a single GPU. This is where you bring in NCCL and start to do all-reduce between all the different workers. And this is where you also start to reach for sharded optimizer state, ZeRO-1, where you take your optimizer states, which are in float and are really large buffers for AdamW, and you spread a lot of that out across all the GPUs, and it really helps to keep your per-GPU memory requirements down (a quick back-of-the-envelope follows below). So it's very helpful to reach for that. Currently, llm.c uses ZeRO-1, the sharded optimizer state. There's a PR for ZeRO-2, but I don't believe I've merged that yet, because it gets a little bit messy, though it might be merged eventually. A lot of llm.c is balancing the improvement in speed against the complexity of what you're actually introducing, and I've actually rejected a lot of PRs because of that, because the code starts to get crazy, and I think that decreases the number of people who can onboard onto the project. And then after multi-GPU you have multi-node, so now you're running across multiple machines, and you have to make sure that you synchronize all of them, that they can find each other, and so on. So we implemented all of that.

And where that leads us is that we can actually train GPT-2, and we can actually reproduce it after all of that work. There's a post in the discussions of llm.c: we can train the 1.6-billion-parameter GPT-2, which was the state-of-the-art LLM as of 2019 or so, on a single node of H100s in about 24 hours, and that costs roughly $600. And the way you do that is extremely dependency-free. There's no need for Python, no need for PyTorch. You do need cuDNN, which is the heaviest dependency, but cuDNN is optional, so if you'd like to roll your own manual attention, that is possible in llm.c.
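As a rough back-of-the-envelope for why that ZeRO-1 sharding matters (my arithmetic, using the 1.6B-parameter figure from the talk and counting only the two AdamW moment buffers in fp32, ignoring parameters, gradients, and activations):

$$
\underbrace{2}_{m,\,v} \times 4\ \text{bytes} \times 1.6\times 10^{9}\ \text{params} \approx 12.8\ \text{GB unsharded}
\qquad\Longrightarrow\qquad
\frac{12.8\ \text{GB}}{8\ \text{GPUs}} \approx 1.6\ \text{GB per GPU with ZeRO-1}
$$

The exact llm.c numbers will differ a bit, but the shape of the saving is the point: the optimizer state stops being replicated on every device.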
Again, cuDNN is kind of the hairiest dependency, but after that it's just a bunch of C code. You compile it and you run it. There's no need for really anything: no conda environments, no pip installs, just nothing, which is amazing. Then you compile your code, you run it, and it starts stepping. You wait 24 hours while it prints some diagnostics. We have almost 50% MFU here on one node, which is quite good. And you get really nice plots, and you beat GPT-2 on HellaSwag. Basically, this just indicates that the optimization went well: no crazy numerical issues, loss spikes, or anything like that at this size. And yeah, you end up with a really good model trained in llm.c.

We can still compare to PyTorch, because remember, we have the PyTorch implementation of all this stuff in parallel on the side. So you can run the almost-equivalent training loop in PyTorch, and we can compare the two implementations side by side. In particular, at the time of writing that post, and I don't know if this has changed because the PyTorch team continues to optimize things over time, but at the time of that post, llm.c was using 30% less memory and was 20% faster in training, just in throughput. And I don't know if I fully, super-duper optimized the PyTorch implementation; I did my personal best. But we were able, I think, to beat PyTorch at training specifically GPT-2 in llm.c. If you want to train anything else, you're in a lot of trouble, because you have to change the code a lot. We're doing that, and I'll come back to it. But for GPT-2 training, we're better, after all that work. And it also compiles and runs much faster, which is beautiful. torch.compile actually takes quite a bit of time, like a minute or something; you're just waiting. So that's also something that I personally don't like to work with, usually.

Okay. So looping back around: it turns out it wasn't all that simple. There was a lot of stuff involved, and it took a few months for a few people. But it was fun, we learned a lot, and we made friends along the way. These are the llm.c core devs. So it was great. Ongoing work: we are adding Llama 3 support. We actually thought we might have it done by today, but there's a little bit more work to do, so we will have Llama 3.1 training in llm.c very, very soon. We will have FP8 support; Arun has been working on this, and there's a big PR coming for FP8 support, which is also interesting. And there are a lot of notable forks of llm.c, all listed on the GitHub repo. The AMD fork is very active, as far as I understand, and quite good. I think the C++/CUDA fork is also quite nice. So a lot of forks, and I encourage you to also fork llm.c. It's fairly readable, I think; I tried to keep it clean and well documented, and I think it's pretty well understood what's in there. It's only maybe, I think, 3,000 lines of code, mostly C.

And one more thought I wanted to get across is that it wasn't all that haphazard to start the project; I had another motivation for starting it. What is llm.c? If PyTorch, and especially torch.compile, is a bit like GCC for Software 2.0, a compiler, then llm.c is a bit like writing assembly. We're doing everything manually, right? Right.
And basically, we wrote llm.c as multiple people over a duration of about three months and got something that was faster than PyTorch in the specific setting of GPT-2 training. And so this exercise basically proves that this is possible. Now, the problem is that it takes multiple people several months. But if LLMs are about to become much better at coding over time, then I think you can expect that an LLM could actually do this for any custom application. And so LLMs could act as a kind of compiler for any custom application you're interested in: they're going to do all the llm.c-style work and output something you can just compile and run for your specific application. So I don't actually know; maybe the use of Python and PyTorch and everything else is just a crutch, because we humans are finite. We have finite knowledge, intelligence, and attention. But actually, don't you want to write all code as custom CUDA kernels and so on? Like, maybe. And the other thing I think is interesting is that the llm.c repo might be useful because, in the early stages of these LLMs and their intelligence, they might not be able to write this code from scratch if you just prompt them with "write GPT-2 in C". You probably won't get llm.c. But you're a lot more likely to get it if you put llm.c in the context of such an LLM, and you can expect that few-shot learning would be very helpful, basically giving it example code. So I think llm.c could be very useful as exactly this kind of example code to give to LLMs as they're about to write all of our custom applications. And I think this is actually not unlikely to happen; yeah, this is kind of likely to happen. So I think software development in general will probably change a lot, and to me, llm.c is an exploration of whether this is even possible, because if it is possible, then maybe this is what's going to happen. So yeah, that's it. Thank you.
Hi everybody. Welcome. Thank you so much for being here. I am so glad that we're able to host this at Figma. My name is Dylan Field. I'm the CEO and cofounder of Figma. And a big welcome to everybody here, but also to everyone who's joining us via live stream as well. I'm really excited for tonight. I think this is going to be a pretty incredible conversation, and I'm proud to be able to introduce the two folks who'll be having it. So first, Elad Gil. Elad is not only a dear friend and mentor of mine, but also to many in Silicon Valley and the startup community globally. And also Arthur Mensch. Arthur is a former academic turned CEO and cofounder of Mistral. And Mistral, for the one or two people in the room who do not know, is breaking incredible ground in open-source models and, I would dare say, changing quite a lot about the future of AI. And with that, I'll pass it off for their fireside chat. Welcome.

Oh, thanks. Thanks so much to Figma for hosting us, and thanks everybody for making it today. And of course to Arthur. Arthur made a heroic effort to join us; he literally had to jump out into traffic, grab a bike, and bike over here. So thank you so much for coming. Discovering the US, I guess. So from a background perspective, you got your PhD in machine learning, you were a staff research scientist at DeepMind, and then you started Mistral. And you started it, I believe, with both some folks from Google, such as yourself, and then some folks from Meta and the Llama project there. You folks have taken an open-core approach, which I think is super interesting and which we can talk about in a little bit. But I was just curious, just to start off: what was the impetus for starting Mistral? How did you decide to do it? What were the motivations and, you know, the initial formation of the company?

So, yeah, I think this had always been on the minds of me, Guillaume, and Timothée. I was at DeepMind, they were at Meta, and I guess we were waiting for the hour, and the hour came with ChatGPT, to some extent. We realized we had an opportunity to create a company pretty quickly, with a good team that we could hire from day one, and go and try to speedrun it a bit, because we weren't starting first. So that's how we got started.

And for, I guess, the people who are watching the live stream, since I think the people in the audience are probably well versed in what Mistral does: can you explain a little bit about the set of products you have, the platform, you know, all the various components now?

Yeah, for sure. So Mistral is a company building foundational models. We are the leader in open-source models. We started the company by creating text-to-text generation models, which are really the foundational block for creating today's generative AI applications. I know we're at Figma, so we're not focusing on images yet, but that is obviously coming at some point. And the differentiation we have is that we took this open-core approach to release Mistral 7B, then Mixtral 8x7B in December, and to build a platform on top of these open-source models, with the addition of commercial models that we introduced in December and then in February. So we're building open-source models, and we're building a portable platform for enterprises, focusing on developers and building tools for developers.

How long did it take from when you founded the company to when you launched 7B?
It took four months, approximately. Yeah.

That's amazing. So I think one of the things that's really noticeable is the immense speed with which Mistral launched its very first product, and then the rapid adoption of it. As 7B came out, suddenly, I think, people realized that you could have these small, performant models that were very fast: inference time and time to first token were very cheap, which made a big difference if you were doing things at high throughput. How did you build something so rapidly? Or how did you focus a team on such a singular goal so quickly?

Well, I guess we thought about what was missing in the field, and we realized that small models were actually quite compelling for people. We saw a community building on top of Llama at the time, on top of Llama 7B. But Llama 7B wasn't good enough, and we realized we could make a 7B model much better. So that's the sweet spot we targeted for our introduction to the world. And basically we had to build the entire stack from scratch: getting the data, building the training code, getting the compute, which was a bit of a challenge because in those four months we were ramping up. We started at zero GPUs, and we actually trained Mistral 7B on something like 500 GPUs. I guess we went fast because the team was very motivated, so not a lot of holidays during those four months. And generally speaking, AI teams that succeed are typically four to five people; AI teams that invent things have always been this size. So we are trying to have an organization where we have squads of five people working on data, working on pretraining, and so far this has worked out quite well.

Is there anything you can share in terms of what's coming next on your roadmap?

Yeah, so we have new open-source models coming, both generalist and focused on specific verticals. So that is coming soon. We are introducing some new fine-tuning features to the platform, and we have introduced a chat-based assistant called Le Chat that is currently just using the model. So it's pretty raw; it's a bit like ChatGPT v0, and we're actively building data connectors and ways to enrich it to make it a compelling solution for enterprises.

What kind of verticals do you plan to focus on, or can you share that yet?

Well, I guess we started with financial services, because that's where most of the maturity was. Basically, we have two go-to-markets: enterprises, starting with financial services because they are mature enough, and digital natives, meaning developers building AI companies or introducing AI to formerly non-AI companies. So those are the two go-to-market pools we're talking to. The first one through some partnerships with the clouds, because, as it turns out, they control the market a bit in that respect; and then, through our platform, we're talking directly to developers.

I guess on the cloud side, one of the relationships you recently announced was with Microsoft and Azure. Is there anything you can say about that relationship, or the access it's providing you to the enterprise?

Yes, this opened up new customers. A lot of enterprises can't really use third-party SaaS providers easily, because you need to go through procurement, risk assessment, et cetera. But if you go as a third-party provider through the cloud, you actually get an accelerator.
And so, when we released Mistral Large on Azure, we got something like 1,000 customers pretty much right away. The truth is, you need to adapt to the fact that enterprises are using the cloud, and they don't want to introduce new platforms easily. And so you need to go through that, at least at the beginning.

And then, one of the things that a lot of the industry is focused on right now is scaling up models into ever larger, ever more performant versions. How do you think about the scale that you all are shooting for in the next six months or a year? Is the plan to have very large models over time? Or how do you think about the mix of things you want to offer?

Yeah, so we first focused on efficiency, to be able to train models more efficiently than was currently done. And then, once we had achieved this efficiency, we started to scale; that's why we did another fundraising, and that's why we started to increase the amount of compute we had. So you can expect new models that will be more powerful, because we are pouring more compute into them, and models that might be a bit larger, because when you grow the compute, you need to increase the capacity of the models. But something that remains very important for us is to be super efficient at inference and to have models that are very compressed. And so that's the kind of model we will continue shipping, especially to the open-source world.

One of the things that was pointed out to me, which I'd love to get your views on, is that as you reach certain capabilities within a model, you can start to accelerate the pace at which you build the next model, because you can use, say, a GPT-4-level model to do RLAIF, or to generate synthetic data, or to do other things that really accelerate what you're doing: data labeling, all sorts of things, in some cases with superhuman performance. How do you think about using models to bootstrap each other up? And does that actually accelerate the timeline for each subsequent release?

Yeah, I guess. Generally speaking, two years ago RLHF was very important. Today it's actually less important, because the models have become better, and they're sometimes good enough to self-supervise themselves. And what we have noticed is that as we scale, this is definitely improving. So that means the costly part of going through human annotations is actually shrinking, and this is also lowering the barrier to entry.

I guess another sort of adjacent area is reasoning. A lot of people feel that as you scale up models, they'll naturally acquire reasoning. And then there are other approaches, and entire companies that have recently been founded, around just focusing on the reasoning aspect of some of these models. How do you think about that? Are you going to be training sub-models for reasoning, or do you think it's just going to come out of scaling the existing models? Is it a mix of the two?

Well, at this point, the only validated way of improving reasoning is to train models on more data and make them bigger. There are obviously some possibilities you get by building an outer loop, adding function calling, and adding data for the model to reason about grounded aspects instead of trying to imagine stuff. So I guess we don't pretend to have a secret recipe for reasoning, but we've made models that are pretty good at reasoning by focusing on the data. We're pretty good at using mathematics in our data, and that's a good way of improving reasoning.
There are many ways to improve it. Code has helped as well. So there's no magic recipe; just focusing on the little things makes it work.

Yeah, I guess one of the reasons I ask is, if you look at the world of AI, there are a few different approaches that have been taken in the past. One is the transformer-based models and scaling them. The other is a little bit more along the lines of AlphaGo and poker and some of the gaming-related approaches, where you're doing self-play as a way to bootstrap new strategies or new capabilities. And those are, in some sense, forms of reasoning. And I know there are certain areas where that may be very natural to do in the context of model training; code would be an example, and there are a few others where you can test things against a real rubric. So, you know, I don't know if you folks are considering things like that, or if that's important or not in your mind.

So Guillaume and Timothée were doing theorem proving with LLMs back in the day at Meta. That's very linked to using the LLM as the reasoning brick and then building an outer loop that involves sampling, that involves Monte Carlo tree search, all these kinds of things. I think the one thing that was standing in the way of this is the fact that models have very high latency, and if you want to sample heavily, you need to make them smaller. So it's very much tied to efficiency. As we grow efficiency, and as hardware increases in capacity as well, you become more able to explore more and to sample more. And that's a good way, effectively, to increase reasoning through that outer-loop development.

And then I guess the other thing a lot more people are talking about or thinking about is memory, and some ability to maintain a longer view of state in different ways across actions, or chaining things for agents. Do you expect to go down any sort of agentic routes anytime soon? Or is the focus much more on core APIs that are enabling in all sorts of ways?

So that's what we started to enable with function calling, which is a good way to start creating agents that store state. When we talk about memory, like memory of a conversation, the way you make it happen is that you introduce some CRUD functions on your middleware side that you give to the model, so it can actually use them to update its memory and its representation. And so function calling is the one multipurpose tool you can use to create complex settings, complex agents. It's hard to make it work, and it's hard to evaluate as well. So I think this is going to be one of the biggest challenges: how do you make agents that work, evaluate them, and make them work better with feedback? And this is one of the challenges we'd like to tackle on the product side.

And then I guess the one other thing that a lot of people have been talking about recently is context windows. For example, I know there are some recent results around biology models where, if you increase the context window, you can end up with better protein folding and things like that. So the context and the context length really matter. I think Gemini launched a million, up to a few million, of context window, and then Magic, I think, has had 5 million for a while. How important do you think that is? Does it displace other things like RAG or fine-tuning? Are all these things going to work in concert with each other?
So it doesn't displace fine-tuning, because fine-tuning has a very different purpose: pouring in your preferences and basically demonstrating the task. On the other hand, it simplifies RAG approaches, because you can pour more knowledge into the context. And what we hear from users is that it's like a drug: once you start to use models with a large context, you don't want to go back. So that's effectively something we want to improve and extend. There are a few techniques for making it happen. On the infrastructure side, it's actually quite a challenge, because you need to handle very large attention matrices, but there are ways around it.

I see what you're saying. So basically, on the RAM, on the GPU, you run out of space for something as you're building a bigger and bigger context window. Or is it something else?

Yeah, there's a variety of techniques you need to rethink for sharding and communication to handle the big matrices. And then you do pay a cost, because the model basically becomes slower due to the quadratic cost.

When do you think we hit a moment where these models are better than humans at most white-collar tasks? Do you think that's two years away, five years away, ten years away?

I guess it depends on the task. There are already a few tasks at which the models are actually better. And so I expect this to unfold pretty quickly, actually. It's hard to give a date, but I would say in three years this is going to look very different, especially if we find a way to deploy agents, and to evaluate them and make them robust and reliable.

What about displacing the CEO of Figma? No, I'm just kidding. Just kidding. Dylan, please keep us going. So I guess there are a lot of different foundation models that people are starting to work on, right? There's obviously a lot of attention on the LLMs, and there have been diffusion models for image generation, although it seems like people are moving more and more towards transformer-based approaches for image and video and other things. Are there big holes, in terms of where you think there are gaps where people aren't building foundation models but they should be?

I would say we've seen some things happening on the robotics side, but I think it's still at a very early stage. Audio is covered; video is starting to be covered. Essentially, models that can take actions and become very good at taking actions: I don't think that is very well covered. There's some progress to be made there. But, yeah, overall I expect all of this to converge towards similar architectures and, at the end of the day, a joint training as we move forward in time.

So do you think eventually everything is a transformer-based model?

Well, transformers are a very good way of representing associations between tokens, between pieces of information, so the exact architecture doesn't really matter that much, but it seems to be enough, a sufficient representation to capture most of the things we want to capture. And we know how to train them well, so we can transfer information between what we learn from text, from images, et cetera. And so that's why I think this is going to be quite hard to displace.

Do you think that will also apply to the hard sciences? If you're trying to do, like, physics simulation, materials science, pure math?

It's not going to be ... I don't expect just next-token prediction to solve that.
And so you do need to move to the outer loop, and you need to figure out a way to make models interact with simulators, potentially, because at some point you need the model to learn the physics, and so you need to bootstrap that with a simulator. But I'm not an expert, to be honest.

And then all these models, of course, need a lot of GPUs, and people have very publicly talked about how there's a GPU crunch right now and shortages of different sorts. When do you think that goes away? Or do you think that goes away?

So I think that probably eases as the H100s come, and as we start to see some competition in the hardware space, which is going to improve cost, I think. I also expect that as we move to foundational models that are multimodal, et cetera, we can actually train on more FLOPs, and I don't think we have hit the wall there in scaling. So I expect this to continue on the training part, and on the inference part as we move into production and have models running agents in the background. So, really removing the bottleneck that we had at the beginning, which was the speed at which we could read information: I expect that inference capacity will spread pretty significantly.

Do you think that will be done through traditional GPU-based approaches, or do you think we'll start having more and more custom ASICs, either for specific transformer models, where you burn the weights onto the silicon, or for transformers in general, where you can just load a set of weights or something?

So the good thing about the fact that everybody is using transformers is that you can specialize hardware for this architecture, and you can make a lot of gains there. There are a few unfortunate bottlenecks on NVIDIA chips; for instance, the memory bandwidth is a problem. And so by moving to more custom chips, you can reduce the cost of inference significantly. It's not really ready yet, so we're not betting on it right now, but I really expect that this is going to improve cost pretty significantly.

So Mistral really started off as a developer-centric product, right? You launched something that was very open source. Now you're starting to serve a variety of enterprises. Is there any commonality in terms of the types of use cases people are coming with, or the areas where enterprises are most quickly adopting these sorts of technologies or approaches?

Yeah. So enterprises adopt the technology for mostly three use cases. The first one is developer productivity, and usually they kind of struggle with the off-the-shelf approach because it's not fitted to their way of developing. They also use knowledge management tools, and usually they've built their own assistant connected to their databases. And the last thing is customer service. The most mature companies have made large progress toward reducing their human engagement with customers and just making it much more efficient. So these are really the three use cases we see with enterprises. With AI companies, it's much more diverse, because they are a bit more creative. But yeah, overall, enterprises have these three use cases. It's also the reason why we are starting to think of moving up the value chain a bit and offering things that are a bit more turnkey, because sometimes they need a little bit of help. Yeah, that makes sense.
I'm guessing many people here saw the tweet from the CEO of Klarna where he's talking about customer success, and how they added a series of tools built on top of OpenAI that basically reduced the number of people they need by 700 in terms of customer support. They launched it in a month, and they had 2.3 million responses in that single month. So it seems like there's this really big wave coming that I think is almost under-discussed, in terms of impact on productivity, impact on jobs, and things like that.

Yeah, and we saw even more diverse use cases. One of them was a platform that engaged with temporary workers to try and find a job for them, through texting. The customer in question went from 150 people engaging directly with customers to seven, and they were actually able to scale the platform much more and to enable temporary workers to work more easily. And generally speaking, this approach of automating more of the customer service is a way to improve the customer service. And so that's, I think, what is exciting about this technology.

What do you think is missing right now, or what is preventing enterprise adoption from accelerating further?

So our bet is that they still struggle a bit to evaluate and to figure out how to verify that a model can actually be put in production. What's missing is a bunch of tools to do continuous integration, and also tools to automatically improve whatever use case the LLM is used for. So I think that is what is missing for developer adoption within enterprises. For user adoption within enterprises, I think we're still pretty far away from creating assistants that follow instructions well and that can be customized easily by users. And so, yeah, on the user side, I think that is what is missing.

One thing that I think you've been very thoughtful about is how to approach AI regulation. And I know that you've been involved with some of the conversations around EU regulation and other regulation of AI. Could you explain your viewpoint in terms of what's important to focus on today versus in the future, and how to think about it more generally?

Yeah, so we had to speak up, because at the time, in October, there was a big movement against open-source AI. And so we had to explain that open source was actually the right way, today, to make the technology secure and well evaluated. And overall, we've been continuously saying that very different conversations are being merged: one about existential risk, which is ill-defined and for which there is little scientific evidence, merged with a discussion about, I guess, national security and AI, and LLMs being used to generate bioweapons, but again, this is something that is lacking evidence. And then there's a bunch of very important problems that we should be focusing on, which is: how do you actually deploy models and control what they are saying? How do you handle biases? How do you set the editorial tone of a model in a way that you can evaluate and control? And I think this is the most important part: how do you build safe products that you can control well and that you can evaluate well? This is the one thing we should be focusing on. That's what we've been saying for a couple of months, because we were a bit forced to speak up.
Yeah, it seems like one of the areas that people are worried about in the short term with AI is things like deepfakes, or people spoofing voices, or other things like that, either for financial attacks, for political purposes, et cetera. Do you all have plans to go down the voice and, sort of, multimodality side?

So generating things that are not text is effectively a bit more of a trap on the safety side, and that is something we have avoided so far. Imitating voices and deepfakes are very concerning, and this is not something that we pretend to be able to sort out. Text is much easier, because generating text is never an enabler of very harmful behavior. Misinformation has been mentioned, but usually misinformation is bottlenecked by diffusion and not by creation. So by focusing on text, we circumvent these issues, which are very real.

I think one of the things that's very striking about Mistral, and I should say about Europe in general right now, is that there's a very robust startup scene. If I look at the two biggest pockets of AI right now in terms of startup formation, it's basically here in Silicon Valley, and then it's the Paris-London corridor: you have ElevenLabs and you have Mistral and all these great companies forming. What do you think is driving that?

I think there are a couple of historical reasons. In London, there was, and there still is, DeepMind, which was a very strong attractor of talent from across the world. And in Paris in 2018, both DeepMind and Google opened research offices, and that augmented the existing research scene, which was already pretty strong, because, as it turns out, France and also a couple of other countries in the European Union have very good education pipelines. And so junior machine learning engineers and junior machine learning scientists are quite good. And that's one of the reasons why today we have a pretty strong ecosystem of companies on both the foundational layer and the application layer.

Yeah, the French seem a lot smarter than the British. So. No, I'm just kidding. I'm not the one saying that. The other thing that I think is kind of striking is you start to see a lot of different AI-based companies focused on regional differences. So, for example, when you launched, you included a variety of different European languages. I know there are models being built right now for Japan, for India, for a variety of different geos. And one could argue either that you'll have large global platform companies that serve everywhere, except maybe China, because China is likely to be firewalled in some ways, just as it has been for the Internet more generally; or you could imagine a world where regional champions emerge. And in particular, you could almost view it like Boeing versus Airbus, where the governments of specific regions decide that they really want to fund or become customers of local players. What do you view as the future world, and how does that evolve in terms of global versus regional platforms?

So we've taken a global approach to distribution. I guess there was another path we could have taken, which was to focus on just the European market, pretending that there was some form of defensibility there. We don't think that is the case. Technology remains very fluid and can circulate across countries.
On the other hand, the technology we're building is effectively very linked to language, and English is only one language among many. As it turns out, LLMs are much better at English than at other languages. So by focusing more on different languages, we managed to make models that are very good at European languages in particular, compared with the American models. And so there's a big market for that. And similarly, there's a big market in Asia for models that can speak Asian languages. There's a variety of scientific problems to be sorted out and solved to address these markets, but those markets are huge, and they haven't been the focus of US companies. So it's effectively an opportunity for us, as a European company, to focus a bit more on the world globally.

Okay, great. I think we can open up to a few questions from the audience, and if people want to just ask, I can always just repeat it. In the back there, please. Yeah, right there. If you want to speak loudly, I can repeat what you say. The question is: do you plan to release closed-source versions of your model, or will you always be open source?

So we have commercial models already, so to an extent we haven't been open-sourcing everything. We are a very young company, but our purpose is to release the best open-source models, and then we are basically building an enterprise offering and some premium features around them that we can sell to sustain the business. And so our strategy today, and that might evolve with time, is to have both very strong open-source models and also models that are, at that point in time, much stronger, as closed-source APIs. The one thing we focus on also for our commercial models is to make deployment of these models very portable and very flexible. So we have customers to whom we ship the weights and allow them to modify the model and do client-side fine-tuning, the same way they would with open-source models. And so, in that sense, we have some coherence across the commercial family and the open-source family.

Cool. Ah, right behind the first question. Developer productivity. So, coding, basically. And right there, please. Yeah, we have plans. We're not making any announcements today, but we do have plans. Yeah. The question, for the people on the stream, is whether or not there is a plan to do code-specific models. Right there. We've been mostly focused on production at this point, because the team was pretty lean, but we're now dedicating a couple of full-time employees to finding new architectures, to, well, doing research. And I think this is super important to remain relevant. So as we scale, we will be able to afford more exploration. It's also very linked to the compute capacity you have: if you want to make some discoveries and make some progress, you need to have enough compute, and we're a bit compute-bound because of the shortage of H100s, but this is going to improve favorably. So we expect to be doing more research and more exploratory research, I guess, because we've been doing research from the start.

I guess related to that, it seems like, in general, your team has a very strong bias for action, and you move very quickly. How do you select for that in the people you hire? Are there specific things you look for? Interview questions you ask?

So we look for AI scientists who can do everything, from going down the infrastructure stack, to building extract-transform-load pipelines, to thinking about mathematics.
So we've been trying to find full-stack AI engineers, and they tend to have a strong bias for action. Really, the focus we had was to find low-ego people willing to get their hands dirty with jobs that are considered boring by some AI scientists, because it is a bit boring. But this has been actually quite productive, because we focused on the right things. Oh, in the back. I guess the team is now quite big, so there's a bunch of challenges associated with that. I was surprised by the amount of inbound that we had and the amount of representation that I had to do, especially as we got drawn into political stuff, which we would rather have avoided, but we kind of didn't have a choice. So this was definitely a surprise for me, generally speaking. I was also surprised by the speed we managed to have, because it actually exceeded our expectations. But, yeah, I had pretty little idea of what the job of a founder would be when we started. It's quite fun, but it's effectively surprising. I was imagining myself as still coding after a year, and that's actually no longer the case, unfortunately. But, yeah, that's the price of trying to scale up pretty quickly. You get to do HR coding now, which is even better. Yeah. Any other questions? Please. So the reason why we started the company is to have a production arm that creates value to fund our research arm. And to be honest, there aren't many demonstrations that such organizations can exist, because you do have a few research labs that are tied to cloud companies that have a very big top line and use it to sustain research. We think that with AI and with the value that the technology brings, there is a way of doing it. But I guess this still remains to be shown, and that's the experiment we are making with Mistral. Probably one last question. I know Arthur has a hard stop. Maybe way in the back there. Yes, I think you can squeeze it down to that point. The question is whether you can have a 7B model that beats Mistral Large. It starts to be a bit tricky, but there might be a way. I also expect the hardware, the local hardware, to improve, and so that will also give a little more space and a little more memory. And yeah, I see more potential there, because effectively you're a bit constrained by scaling laws, which tell you that at some point you do saturate the capacity of models of a certain size. What is the main constraint? Or what do you think is the thing that it asymptotes against? For scaling laws, or? I mean, you can make 7B models very strong if you focus on a specific task. But if you want to pour all of the knowledge of the world into 7B parameters, well, that's actually quite ambitious. So one thing is, for instance, multilingual models at this size are not a great idea. So you do need to focus on a specific part of the human knowledge you want to compress. I guess one last question for me and then we can wrap up: a friend of mine pointed this out to me, which is basically, if you think about what you do when you're training a model, you spin up a giant data center or supercomputer and then you run it for n weeks or months or however long you decide to train for, and then the output is a file. Yeah, you're basically zipping the world's knowledge. It's not much more than that, actually. Yeah. How do you think about either forms of continuous training or retraining over time, or sort of longer training runs that get tacked on?
I know some people are basically training for longer and longer, then dropping a model, then continuing to train and dropping another model. And so I don't know how you think about where the world heads. Yeah, this is an efficient way of training, so that's definitely interesting for us. Okay, great. Well, please join me in thanking Arthur.
Please welcome AI researcher and founding member of OpenAI, Andrej Karpathy. Hi, everyone. I'm happy to be here to tell you about the state of GPT and, more generally, about the rapidly growing ecosystem of large language models. I would like to partition the talk into two parts. In the first part, I would like to tell you about how we train GPT assistants, and then in the second part, we're going to take a look at how we can use these assistants effectively for your applications. So first, let's take a look at the emerging recipe for how to train these assistants, and keep in mind that this is all very new and still rapidly evolving, but so far, the recipe looks something like this. Now, this is kind of a complicated slide, so I'm going to go through it piece by piece, but roughly speaking, we have four major stages: pretraining, supervised finetuning, reward modeling, and reinforcement learning, and they follow each other serially. Now, in each stage, we have a dataset that powers that stage, we have an algorithm that, for our purposes, will be an objective for training the neural network, and then we have a resulting model, and then there are some notes at the bottom. So the first stage we're going to start with is the pretraining stage. Now, this stage is kind of special in this diagram, and this diagram is not to scale, because this stage is where all of the computational work basically happens. This is 99 percent of the training compute time and also flops. This is where we are dealing with Internet-scale datasets, with thousands of GPUs in a supercomputer, and also months of training potentially. The other three stages are finetuning stages that are much more along the lines of a small number of GPUs and hours or days. So let's take a look at the pretraining stage to achieve a base model. First, we are going to gather a large amount of data. Here's an example of what we call a data mixture that comes from this paper that was released by Meta, where they released the LLaMA base model. Now, you can see roughly the kinds of datasets that enter into these collections. We have CommonCrawl, which is a web scrape, C4, which is also CommonCrawl, and then some high-quality datasets as well. So for example, GitHub, Wikipedia, Books, arXiv, Stack Exchange and so on. These are all mixed up together, and then they are sampled according to some given proportions, and that forms the training set for the neural net, for the GPT. Now, before we can actually train on this data, we need to go through one more preprocessing step, and that is tokenization. And this is basically a translation of the raw text that we scrape from the Internet into sequences of integers, because that's the native representation over which GPTs function. Now, this is a lossless kind of translation between pieces of text and tokens and integers, and there are a number of algorithms for this stage. Typically, for example, you could use something like byte pair encoding, which iteratively merges little text chunks and groups them into tokens. And so here I'm showing some example chunks of these tokens, and then this is the raw integer sequence that will actually feed into a transformer.
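To make the tokenization step a bit more tangible, here is a minimal sketch using the tiktoken library and its GPT-2 byte pair encoding; the library choice is just for illustration, since any BPE tokenizer behaves similarly:

```python
import tiktoken

enc = tiktoken.get_encoding("gpt2")              # GPT-2's byte pair encoding
ids = enc.encode("Hello world, this is tokenization.")
print(ids)                                       # a list of integers, one per BPE token
print(enc.decode(ids))                           # lossless round trip back to the text
```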
Now, here I'm showing two sets of example hyperparameters that govern this stage. For GPT-4, we did not release too much information about how it was trained and so on, so I'm using GPT-3's numbers, but GPT-3 is of course a little bit old by now, about three years old, whereas LLaMA is a fairly recent model from Meta. These are roughly the orders of magnitude that we're dealing with when we're doing pretraining. The vocabulary size is usually a few tens of thousands of tokens. The context length is usually something like 2,000, 4,000, or nowadays even 100,000, and this governs the maximum number of integers that the GPT will look at when it's trying to predict the next integer in a sequence. You can see that the number of parameters is roughly, say, 65 billion for LLaMA. Now, even though LLaMA has only 65B parameters compared to GPT-3's 175 billion parameters, LLaMA is a significantly more powerful model, and intuitively, that's because the model is trained for significantly longer, in this case 1.4 trillion tokens instead of 300 billion tokens. So you shouldn't judge the power of a model just by the number of parameters that it contains. Below, I'm showing some tables of the rough hyperparameters that typically go into specifying the transformer neural network, so the number of heads, the dimension size, the number of layers, and so on, and on the bottom I'm showing some training hyperparameters. So for example, to train the 65B model, Meta used 2,000 GPUs, roughly 21 days of training, and roughly several million dollars. So those are the rough orders of magnitude that you should have in mind for the pretraining stage. Now, when we're actually pretraining, what happens? Roughly speaking, we are going to take our tokens, and we're going to lay them out into data batches. So we have these arrays that will feed into the transformer, and these arrays are B by T, where B is the batch size, with independent examples stacked up in rows, and T is the maximum context length. In my picture the context length is only 10, but in practice this could be 2,000, 4,000, etc., so these are extremely long rows. What we do is we take these documents, and we pack them into rows, and we delimit them with these special end-of-text tokens, basically telling the transformer where a new document begins. And so here, I have a few examples of documents, and then I stretch them out into this input. Now, we're going to feed all of these numbers into the transformer. Let me just focus on a single particular cell, but the same thing will happen at every cell in this diagram. So let's look at the green cell. The green cell is going to take a look at all of the tokens before it, so all of the tokens in yellow, and we're going to feed that entire context into the transformer neural network, and the transformer is going to try to predict the next token in the sequence, in this case in red. Now, the transformer, and I unfortunately don't have too much time to go into the full details of this neural network architecture, is just a large blob of neural net stuff for our purposes, and it's got several tens of billions of parameters typically, or something like that. Of course, as I tune these parameters, you're getting slightly different predicted distributions for every single one of these cells. And so for example, if our vocabulary size is 50,257 tokens, then we're going to have that many numbers, because we need to specify a probability distribution for what comes next. So basically, we have a probability for whatever may follow. Now, in this specific example, for this specific cell, 513 will come next, and so we can use this as a source of supervision to update our transformer's weights.
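As a toy illustration of the batching just described, and not the actual training code, a packed token stream can be reshaped into a B by T array of inputs with next-token targets like this:

```python
import torch

B, T = 4, 10                                         # batch size, context length (tiny here)
vocab_size = 50257
stream = torch.randint(0, vocab_size, (B * T + 1,))  # stand-in for documents packed together
                                                     # and delimited by end-of-text tokens
x = stream[:-1].view(B, T)   # inputs: the context each cell gets to look at
y = stream[1:].view(B, T)    # targets: the next token at every position
# the transformer predicts a distribution over the vocabulary at each of the B*T cells,
# and the loss compares those distributions against y
```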
And so we're applying this basically on every single cell in parallel, and we keep swapping batches, and we're trying to get the transformer to make the correct predictions over what token comes next in a sequence. So let me show you more concretely what this looks like when you train one of these models. This is actually coming from the New York Times, and they trained a small GPT on Shakespeare. And so here's a small snippet of Shakespeare, and they train their GPT on it. Now, in the beginning, at initialization, the GPT starts with completely random weights, so you're getting completely random outputs as well. But over time, as you train the GPT longer and longer, you are getting more and more coherent and consistent samples from the model, and the way you sample from it, of course, is you predict what comes next, you sample from that distribution, and you keep feeding that back into the process, and you can basically sample large sequences. And so by the end, you see that the transformer has learned about words and where to put spaces and where to put commas and so on. And so we're making more and more consistent predictions over time. These are the kinds of plots that you are looking at when you're doing model pretraining. Effectively, we're looking at the loss function over time as you train, and low loss means that our transformer is giving a higher probability to the correct next integer in the sequence. Now, what are we going to do with this model once we've trained it after a month? Well, the first thing that we noticed, we the field, is that these models, basically in the process of language modeling, learn very powerful general representations, and it's possible to very efficiently fine tune them for any arbitrary downstream task you might be interested in. So as an example, if you're interested in sentiment classification, the approach used to be that you collect a bunch of positives and negatives and then you train some kind of an NLP model for that, but the new approach is: ignore sentiment classification, go off and do large language model pretraining, train a large transformer, and then you may only have a few examples, and you can very efficiently fine tune your model for that task. And so this works very well in practice. And the reason for this is that basically the transformer is forced to multitask across a huge number of tasks in the language modeling task, because just in terms of predicting the next token, it's forced to understand a lot about the structure of the text and all the different concepts therein. So that was GPT-1. Now, around the time of GPT-2, people noticed that actually, even better than fine tuning, you can prompt these models very effectively. These are language models and they want to complete documents, so you can actually trick them into performing tasks just by arranging these fake documents. So in this example, for example, we have some passage, and then we sort of do QA, QA, QA. This is called a few-shot prompt, and then we do Q, and then as the transformer tries to complete the document, it's actually answering our question. So this is an example of prompt engineering a base model: making it believe that it's imitating a document and getting it to perform a task. And so this kicked off, I think, the era of, I would say, prompting over fine tuning, and seeing that this actually can work extremely well on a lot of problems, even without training any neural networks, fine tuning or so on.
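Going back to how sampling works for a moment, here is a minimal sketch of the predict, sample, and feed-back loop described above, assuming a hypothetical model that returns logits of shape (batch, time, vocab):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def sample(model, idx, max_new_tokens):
    # idx: (B, T) tensor of token ids already in the context
    for _ in range(max_new_tokens):
        logits = model(idx)                                # (B, T, vocab)
        probs = F.softmax(logits[:, -1, :], dim=-1)        # distribution for the next token
        next_id = torch.multinomial(probs, num_samples=1)  # sample from it
        idx = torch.cat([idx, next_id], dim=1)             # feed the sample back in
    return idx
```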
Now, since then, we've seen an entire evolutionary tree of base models that everyone has trained. Not all of these models are available. For example, the GPT-4 base model was never released. The GPT-4 model that you might be interacting with over the API is not a base model, it's an assistant model, and we're going to cover how to get those in a bit. The GPT-3 base model is available via the API under the name davinci, and the GPT-2 base model is available even as weights on our GitHub repo. But currently, the best available base model is probably the LLaMA series from Meta, although it is not commercially licensed. Now, one thing to point out is that base models are not assistants. They don't want to answer your questions, they just want to complete documents. So if you tell them to write a poem about bread and cheese, it will answer questions with more questions; it's completing what it thinks is a document. However, you can prompt them in a specific way that is more likely to work for base models. So as an example: "Here's a poem about bread and cheese", and in that case it will autocomplete correctly. You can even trick base models into being assistants. And the way you would do this is you would create a specific few-shot prompt that makes it look like there's some kind of a document where the human and the assistant are exchanging information. Then at the bottom, you put your query at the end, and the base model will sort of condition itself into being a helpful assistant and answer. But this is not very reliable and doesn't work super well in practice, although it can be done. Instead, we have a different path to make actual GPT assistants, not just base model document completers. And so that takes us into supervised finetuning. In the supervised finetuning stage, we are going to collect small but high-quality datasets, and in this case, we're going to ask human contractors to gather data of the form prompt and ideal response. And we're going to collect lots of these, typically tens of thousands or something like that. Then we're going to still do language modeling on this data. So nothing changed algorithmically, we're just swapping out the training set. It used to be Internet documents, which are high quantity but low quality, and we swap that for QA prompt-response data, which is low quantity but high quality. So we still do language modeling, and then after training, we get an SFT model. You can actually deploy these models and they are actual assistants, and they work to some extent. Let me show you what an example demonstration might look like. Here's something that a human contractor might come up with. Here's some random prompt: can you write a short introduction about the relevance of the term monopsony, or something like that? And then the contractor also writes out an ideal response. And when they write out these responses, they are following extensive labeling documentation, and they are being asked to be helpful, truthful, and harmless. And these labeling instructions here, you probably can't read them, and neither can I, but they're long, and this is just people following instructions and trying to complete these prompts. So that's what the dataset looks like. And you can train these models, and this works to some extent.
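Here is a sketch of what a single SFT training example could look like once tokenized: the prompt and the ideal response are concatenated into one sequence, and we keep doing plain language modeling. Masking the loss so that only the response tokens are supervised is a common choice that is assumed here, and the texts are made up for illustration:

```python
import tiktoken

enc = tiktoken.get_encoding("gpt2")
prompt = "Can you write a short introduction about the relevance of the term monopsony?\n"
response = "Monopsony refers to a market structure in which there is a single buyer of labor..."

prompt_ids = enc.encode(prompt)
response_ids = enc.encode(response)
tokens = prompt_ids + response_ids                 # one sequence, still language modeling
targets = [-100] * len(prompt_ids) + response_ids  # -100 is the usual "ignore" index, so
                                                   # the loss only applies to the response
```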
Now, you can actually continue the pipeline from here on and go into RLHF, reinforcement learning from human feedback, which consists of both reward modeling and reinforcement learning. So let me cover that, and then I'll come back to why you may want to go through the extra steps and how that compares to SFT models. So in the reward modeling step, what we're going to do is shift our data collection to be of the form of comparisons. So here's an example of what our dataset will look like. I have the same prompt, an identical prompt, on the top, which is asking the assistant to write a program or a function that checks if a given string is a palindrome. And then what we do is we take the SFT model, which we've already trained, and we create multiple completions. So in this case, we have three completions that the model has created, and then we ask people to rank these completions. If you stare at this for a while, and by the way, these are very difficult things to do, comparing some of these completions, this can take people even hours for a single prompt-completion pair. But let's say we decided that one of these is much better than the others, and so on. So we rank them. Then we can follow that with something that looks very much like a binary classification on all the possible pairs between these completions. So what we do now is, we lay out our prompt in rows, and the prompt is identical across all three rows here. So it's all the same prompt, but the completion varies. And so the yellow tokens are coming from the SFT model. Then what we do is we append another special reward readout token at the end, and we basically only supervise the transformer at this single green token. The transformer will predict some reward for how good that completion is for that prompt, and so basically it makes a guess about the quality of each completion. And then once it makes a guess for every one of them, we also have the ground truth, which is telling us the ranking of them. And so we can actually enforce that some of these numbers should be much higher than others, and so on. We formulate this into a loss function and we train our model to make reward predictions that are consistent with the ground truth coming from the comparisons from all these contractors. So that's how we train our reward model, and that allows us to score how good a completion is for a prompt. Once we have a reward model, we can't really deploy it by itself, because it's not very useful as an assistant on its own, but it's very useful for the reinforcement learning stage that follows.
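A minimal sketch of the comparison-based objective described above, in the style of the InstructGPT pairwise loss: for each pair where humans preferred one completion over another, push the predicted reward of the preferred one above the other. The tensors here are toy values:

```python
import torch
import torch.nn.functional as F

def reward_pair_loss(r_preferred, r_other):
    # r_* are scalar rewards read out at the special reward token of each completion
    return -F.logsigmoid(r_preferred - r_other).mean()

r_a = torch.tensor([1.3, 0.2])     # rewards for the completions humans ranked higher
r_b = torch.tensor([0.1, -0.5])    # rewards for the completions ranked lower
loss = reward_pair_loss(r_a, r_b)  # pushes preferred rewards above the others
```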
Because we have a reward model, we can score the quality of any arbitrary completion for any given prompt. So what we do during reinforcement learning is we basically get, again, a large collection of prompts, and now we do reinforcement learning with respect to the reward model. Here's what that looks like. We take a single prompt, we lay it out in rows, and now we use basically the model we'd like to train, which was initialized from the SFT model, to create some completions in yellow, and then we append the reward token again and we read off the reward according to the reward model, which is now kept fixed. It doesn't change any more. And now the reward model tells us the quality of every single completion for these prompts, and so what we can do is we can now basically apply the same language modeling loss function, but we're only training on the yellow tokens, and we are weighting the language modeling objective by the rewards indicated by the reward model. So as an example, in the first row, the reward model said that this is a fairly high-scoring completion, and so all the tokens that we happened to sample on the first row are going to get reinforced and they're going to get higher probabilities in the future. Conversely, on the second row, the reward model really did not like this completion, -1.2, and so therefore every single token that we sampled in that second row is going to get a slightly lower probability in the future. And we do this over and over on many prompts and many batches, and basically we get a policy that creates yellow tokens here, basically completions that will all score high according to the reward model that we trained in the previous stage. So that's how we train; that's what the RLHF pipeline is. And then at the end, you get a model that you could deploy. So as an example, ChatGPT is an RLHF model, but some other models that you might come across, for example Vicuna-13B and so on, are SFT models. So we have base models, SFT models, and RLHF models, and that's the state of things there.
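A toy sketch of the reinforcement learning objective just described: the usual token-level language modeling loss on the sampled completions, weighted by the reward the (now frozen) reward model assigned to each one. Real RLHF pipelines use PPO with additional terms such as a KL penalty, which are omitted here:

```python
import torch
import torch.nn.functional as F

def rl_weighted_loss(logits, sampled_tokens, rewards):
    # logits: (B, T, vocab) from the policy; sampled_tokens: (B, T); rewards: (B,)
    logprobs = F.log_softmax(logits, dim=-1)
    tok_logprobs = logprobs.gather(-1, sampled_tokens.unsqueeze(-1)).squeeze(-1)  # (B, T)
    # completions with high reward get reinforced, low or negative ones get discouraged
    return -(rewards.unsqueeze(1) * tok_logprobs).mean()
```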
Now, why would you want to do RLHF? One answer that's not that exciting is that it just works better. This comes from the InstructGPT paper. According to these experiments, which are a while ago now, these PPO models are RLHF, and we see that they are basically just preferred in a lot of comparisons when we give them to humans. So humans just prefer tokens that come from RLHF models compared to SFT models, compared to a base model that is prompted to be an assistant. So it just works better. But you might ask why, why does it work better? I don't think there's a single amazing answer that the community has really agreed on, but I will just offer one reason, potentially. It has to do with the asymmetry between how computationally easy it is to compare versus to generate. Let's take the example of generating a haiku. Suppose I ask a model to write a haiku about paper clips. If you're a contractor trying to provide training data, imagine being a contractor collecting data for the SFT stage: how are you supposed to create a nice haiku about a paper clip? You might just not be very good at that. But if I give you a few example haikus, you might be able to appreciate some of them a lot more than others. And so judging which one is good is a much easier task. And so basically, this asymmetry makes it so that comparisons are a better way to potentially leverage yourself as a human and your judgment to create a slightly better model. Now, RLHF models are not strictly an improvement on the base models in some cases. In particular, we've noticed, for example, that they lose some entropy. That means they give more peaky results; they can output samples with lower variation than the base model. The base model has lots of entropy and will give lots of diverse outputs. So, for example, one place where I still prefer to use a base model is the setup where you basically have n things and you want to generate more things like them. And so here is an example that I just cooked up. I want to generate cool Pokemon names. I gave it seven Pokemon names and I asked the base model to complete the document, and it gave me a lot more Pokemon names. These are fictitious; I tried to look them up, and I don't believe they're actual Pokemon. And this is the kind of task that I think the base model would be good at, because it still has lots of entropy. It'll give you lots of diverse, cool things that look like whatever you give it before. So, having said all that, these are the kinds of assistant models that are probably available to you at this point. There is a team at Berkeley that ranked a lot of the available assistant models and gave them basically Elo ratings. Currently, some of the best models, of course, are GPT-4, by far, I would say, followed by Claude, GPT-3.5, and then a number of models, some of which might be available as weights, like Vicuna, Koala, etc. And the first three rows here are all RLHF models, and all of the other models, to my knowledge, are SFT models, I believe. Okay, so that's how we train these models at a high level. Now I'm going to switch gears, and let's look at how we can best apply the GPT assistant model to your problems. Now, I would like to work with a concrete example here. Let's say that you are working on an article or a blog post, and you're going to write this sentence at the end: "California's population is 53 times that of Alaska." So for some reason, you want to compare the populations of these two states. Think about the rich internal monologue and tool use, and how much work actually goes on computationally in your brain, to generate this one final sentence. So here's maybe what that could look like in your brain. Okay, for this next step of my blog, let me compare these two populations. Okay, first I'm obviously going to need to get both of these populations. Now, I know that I probably don't know these populations off the top of my head, so I'm kind of aware of what I know or don't know, of my self-knowledge. So I do some tool use and I go to Wikipedia, and I look up California's population and Alaska's population. Now, I know that I should divide the two, but again, I know that dividing 39.2 by 0.74 is very unlikely to succeed. That's not the kind of thing that I can do in my head, and so therefore I'm going to rely on a calculator. So I use a calculator, punch it in, and see that the output is roughly 53. Then maybe I do some reflection and sanity checks in my brain: does 53 make sense? Well, that's quite a large fraction, but then California is the most populous state, so maybe that looks okay. Then I have all the information I might need, and now I get to the sort of creative portion of writing. So I might start to write something like "California has 53x times greater", and then I think to myself, that's actually really awkward phrasing, so let me delete that and try again. And so as I'm writing, I have this separate process, almost inspecting what I'm writing and judging whether it looks good or not, and then maybe I delete and maybe I reframe it, and then maybe I'm happy with what comes out. So basically, long story short, a ton happens under the hood in terms of your internal monologue when you create sentences like this. But what does a sentence like this look like when we are training a GPT on it? From GPT's perspective, this is just a sequence of tokens. So GPT, when it's reading or generating these tokens, just goes chunk, chunk, chunk, chunk, and each chunk is roughly the same amount of computational work per token.
And these transformers are not very shallow networks; they have about 80 layers of reasoning, but 80 is still not too much. And so this transformer is going to do its best to imitate, but of course, the process here looks very, very different from the process that you took. So in particular, in our final artifacts, in the datasets that we create and then eventually feed to LLMs, all that internal dialogue is completely stripped, and unlike you, the GPT will look at every single token and spend the same amount of compute on every one of them. So you can't expect it to do too much work per token, and also, in particular, these transformers are basically just token simulators, so they don't know what they don't know. They just imitate the next token. They don't know what they're good at or not good at. They just try their best to imitate the next token. They don't reflect in a loop. They don't sanity check anything. They don't correct their mistakes along the way; by default, they just sample token sequences. They don't have a separate inner monologue stream in their head where they're evaluating what's happening. Now, they do have some cognitive advantages, I would say, and that is that they do actually have very large fact-based knowledge across a vast number of areas, because they have, say, several tens of billions of parameters. So that's a lot of storage for a lot of facts. And they also, I think, have a relatively large and perfect working memory. Whatever fits into the context window is immediately available to the transformer through its internal self-attention mechanism, and so it's kind of like perfect memory. It's got a finite size, but the transformer has very direct access to it, and so it can losslessly remember anything that is inside its context window. So this is kind of how I would compare those two, and the reason I bring all of this up is because I think, to a large extent, prompting is just making up for this cognitive difference between these two kinds of architectures, our brains here and LLM brains. You can look at it that way almost. So here's one thing that people found, for example, works pretty well in practice. Especially if your tasks require reasoning, you can't expect the transformer to do too much reasoning per token, and so you have to really spread out the reasoning across more and more tokens. So for example, you can't give a transformer a very complicated question and expect it to get the answer in a single token. There's just not enough time for it. These transformers need tokens to think, quote unquote, I like to say sometimes. And so these are some of the things that work well: you may, for example, have a few-shot prompt that shows the transformer that it should show its work when it's answering a question, and if you give a few examples, the transformer will imitate that template, and it will just end up working out better in terms of its evaluation. Additionally, you can elicit this kind of behavior from the transformer by saying "let's think step by step", because this conditions the transformer into showing its work, and because it kind of snaps into a mode of showing its work, it's going to do less computational work per token, and so it's more likely to succeed as a result, because it's doing slower reasoning over time.
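As a concrete illustration of spreading the reasoning over more tokens, here is a sketch of a few-shot, show-your-work prompt combined with the "let's think step by step" trigger; the wording is made up for illustration and is not a canonical prompt:

```python
few_shot_prompt = """Q: Roger has 5 balls. He buys 2 cans of 3 tennis balls each. How many balls does he have now?
A: He bought 2 * 3 = 6 balls. 5 + 6 = 11. The answer is 11.

Q: A shelf holds 4 rows of 7 books. How many books are on the shelf?
A: Let's think step by step."""
# the model imitates the worked-out template, spending many tokens on the reasoning
```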
Here's another example; this one is called self-consistency. We saw that, when writing, we had the ability to start, and if it didn't work out, to try again, and to try multiple times and maybe select the version that worked best. So in these kinds of approaches, you may sample not just once, but multiple times, and then have some process for finding the samples that are good and keeping just those, or doing a majority vote, or something like that. Basically, as these transformers predict the next token, just like you they can get unlucky: they could sample a not-very-good token and go down a blind alley in terms of reasoning. And unlike you, they cannot recover from that. They are stuck with every single token they sample, and so they will continue the sequence, even if they know that the sequence is not going to work out. So give them the ability to look back, inspect, or basically sample around it. Here's another technique: it turns out that LLMs actually know when they've screwed up. As an example, say you ask the model to generate a poem that does not rhyme, and it gives you a poem, but it actually rhymes. It turns out that, especially for the bigger models like GPT-4, you can just ask it, "did you meet the assignment?", and GPT-4 knows very well that it did not meet the assignment. It just kind of got unlucky in its sampling. And so it will tell you, "No, I didn't actually meet the assignment here. Let me try again." But without you prompting it, it doesn't know to revisit and so on. So you have to make up for that in your prompts; you have to get it to check. If you don't ask it to check, it's not going to check by itself; it's just a token simulator. I think more generally, a lot of these techniques fall into the bucket of what I would call recreating our System 2. You might be familiar with the System 1 and System 2 thinking for humans. System 1 is a fast, automatic process, and I think kind of corresponds to an LLM just sampling tokens. And System 2 is the slower, deliberate, planning part of your brain. And so this is a paper actually from just last week, because this space is evolving pretty quickly; it's called Tree of Thought. And in Tree of Thought, the authors of this paper proposed maintaining multiple completions for any given prompt, and then also scoring them along the way and keeping the ones that are going well, if that makes sense. And so a lot of people are really playing around with prompt engineering to basically bring back, for LLMs, some of these abilities that we sort of have in our brains. Now, one thing I would like to note here is that this is not just a prompt. These are actually prompts used together with some Python glue code, because you have to maintain multiple prompts and you also have to do some tree search algorithm here to figure out which prompts to expand, etc. It's a symbiosis of Python glue code and individual prompts that are called in a while loop or in a bigger algorithm.
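Here is a minimal sketch of that kind of Python glue code applied to self-consistency: sample several completions for the same prompt and take a majority vote over the extracted answers. The generate and extract_answer helpers are hypothetical stand-ins for your own sampling and parsing code:

```python
from collections import Counter

def self_consistent_answer(generate, extract_answer, prompt, n=5):
    # generate(prompt, temperature) samples one completion; extract_answer parses out
    # the final answer from it (both are hypothetical helpers)
    answers = [extract_answer(generate(prompt, temperature=0.8)) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]   # majority vote over the samples
```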
I also think there's a really cool parallel here to AlphaGo. AlphaGo has a policy for placing the next stone when it plays Go, and its policy was trained originally by imitating humans. But in addition to this policy, it also does Monte Carlo Tree Search, and basically it will play out a number of possibilities in its head, evaluate all of them, and only keep the ones that work well. And so I think this is the equivalent of AlphaGo, but for text, if that makes sense. So, just like Tree of Thought, I think more generally people are starting to really explore more general techniques of not just simple question-answer prompts, but something that looks a lot more like Python glue code stringing together many prompts. So on the right, I have an example from this paper called ReAct, where they structure the answer to a prompt as a sequence of thought-action-observation, thought-action-observation, and it's a full rollout, kind of a thinking process to answer the query. And in these actions, the model is also allowed to use tools. On the left, I have an example of AutoGPT. Now, AutoGPT, by the way, is a project that got a lot of hype recently, but I still find it inspirationally interesting. It's a project that allows an LLM to keep a task list and continue to recursively break down tasks. I don't think this currently works very well, and I would not advise people to use it in practical applications. I just think it's something to generally take inspiration from in terms of where this is going, I think, over time. So that's like giving our model System 2 thinking. The next thing I find kind of interesting is this, I would say, almost psychological quirk of LLMs: LLMs don't want to succeed, they want to imitate. You want to succeed, and you should ask for it. What I mean by that is, when transformers are trained, they have training sets, and there can be an entire spectrum of performance qualities in their training data. So for example, there could be some kind of a prompt for some physics question or something like that, and there could be a student's solution that is completely wrong, but there can also be an expert answer that is extremely right. And transformers know about low-quality solutions and high-quality solutions, but by default they want to imitate all of it, because they're just trained on language modeling. And so, at test time, you actually have to ask for good performance. In the example in this paper, they tried various prompts, and "let's think step by step" was very powerful, because it spread out the reasoning over many tokens. But what worked even better is, "let's work this out in a step-by-step way to be sure we have the right answer." And so it's like conditioning on getting the right answer, and this actually makes the transformer work better, because the transformer doesn't have to hedge its probability mass on low-quality solutions, as ridiculous as that sounds. And so, basically, feel free to ask for a strong solution. Say something like, "you are a leading expert on this topic", "pretend you have IQ 120", etc. But don't try to ask for too much IQ, because if you ask for IQ 400, you might be out of data distribution, or even worse, you could be in data distribution for something like sci-fi stuff, and it will start to take on some sci-fi or roleplaying flavor or something like that. So you have to find the right amount of IQ. I think it's got some U-shaped curve there. Next up, as we saw, when we are trying to solve problems, we know what we are good at and what we're not good at, and we lean on tools computationally. You want to do the same potentially with your LLMs. So in particular, we may want to give them calculators, code interpreters, and so on, the ability to do search, and there are a lot of techniques for doing that.
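To make the thought-action-observation pattern and tool use a bit more concrete, here is a toy sketch of such a loop; llm and run_tool are hypothetical stand-ins (one returns a text continuation, the other executes a tool call such as a search or a calculator), and real implementations parse the model's output much more carefully:

```python
def react_style_loop(llm, run_tool, question, max_steps=5):
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(transcript + "Thought:")   # model writes a thought and maybe an action
        transcript += "Thought:" + step + "\n"
        if "Final Answer:" in step:
            return step.split("Final Answer:")[-1].strip()
        if "Action:" in step:
            action = step.split("Action:", 1)[1].strip()
            transcript += f"Observation: {run_tool(action)}\n"   # tool result fed back in
    return transcript   # ran out of steps; return the rollout for inspection
```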
One thing to keep in mind, again, is that these transformers by default may not know what they don't know. So you may even want to tell the transformer in a prompt: you are not very good at mental arithmetic; whenever you need to do very large number addition, multiplication, or whatever, instead use this calculator; here's how you use the calculator, you use this token combination, etc., etc. You have to actually spell it out, because the model by default doesn't necessarily know what it's good at or not good at, just like you and I might not. Next up, I think something that is very interesting: we went from a world that was retrieval only, and the pendulum has swung all the way to the other extreme, where it's memory only in LLMs. But actually, there's this entire space in between of retrieval-augmented models, and this works extremely well in practice. As I mentioned, the context window of a transformer is its working memory. If you can load the working memory with any information that is relevant to the task, the model will work extremely well, because it can immediately access all that memory. And so I think a lot of people are really interested in retrieval-augmented generation. On the bottom, I have an example of LlamaIndex, which has data connectors to lots of different types of data, and you can index all of that data and make it accessible to LLMs. The emerging recipe there is: you take relevant documents, you split them up into chunks, you embed all of them, and you basically get embedding vectors that represent that data. You store that in a vector store, and then at test time you make some kind of a query to your vector store, you fetch chunks that might be relevant to your task, you stuff them into the prompt, and then you generate. So this can work quite well in practice. This is, I think, similar to when you and I solve problems. You can do everything from your memory, and transformers have a very large and extensive memory, but it also really helps to reference some primary documents. So whenever you find yourself going back to a textbook to find something, or going back to the documentation of a library to look something up, transformers definitely want to do that too. You have some memory of how the documentation of a library works, but it's much better to look it up. So the same applies here. Next, I wanted to briefly talk about constraint prompting. I also find this very interesting. This is basically a set of techniques for forcing a certain template in the outputs of LLMs. So guidance is one example, from Microsoft actually. And here we are enforcing that the output from the LLM will be JSON. And this will actually guarantee that the output takes on this form, because they go in and they mess with the probabilities of all the different tokens that come out of the transformer, and they clamp those tokens, and then the transformer is only filling in the blanks here, and then you can enforce additional restrictions on what could go into those blanks. This can be really helpful, and I think this kind of constrained sampling is also extremely interesting.
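To make the retrieval recipe described a moment ago concrete, here is a minimal sketch: chunk the documents, embed the chunks, retrieve the nearest ones for a query, and stuff them into the prompt. The embed and llm callables are hypothetical stand-ins for an embedding model and a language model call; in practice you would use a proper vector store, such as the ones LlamaIndex wraps:

```python
import numpy as np

def build_index(docs, embed, chunk_size=500):
    chunks = [d[i:i + chunk_size] for d in docs for i in range(0, len(d), chunk_size)]
    vectors = np.stack([embed(c) for c in chunks])   # one embedding vector per chunk
    return chunks, vectors

def retrieval_augmented_answer(query, chunks, vectors, embed, llm, k=3):
    q = embed(query)
    sims = vectors @ q / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(q) + 1e-8)
    context = "\n\n".join(chunks[i] for i in np.argsort(-sims)[:k])   # top-k relevant chunks
    return llm(f"Use the context to answer.\n\nContext:\n{context}\n\nQuestion: {query}")
```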
I also want to say a few words about fine tuning. It is the case that you can get really far with prompt engineering, but it's also possible to think about fine tuning your models. Now, fine tuning models means that you are actually going to change the weights of the model. It is becoming a lot more accessible to do this in practice, and that's because of a number of techniques that have been developed, and have had libraries built for them, very recently. So for example, parameter-efficient fine tuning techniques like LoRA make sure that you're only training small, sparse pieces of your model. So most of the model is kept clamped at the base model, and some pieces of it are allowed to change, and this still works pretty well empirically and makes it much cheaper to tune only small pieces of your model. It also means that, because most of your model is clamped, you can use very low precision inference for computing those parts, because they are not going to be updated by gradient descent, and so that makes everything a lot more efficient as well. And in addition, we have a number of open source, high-quality base models. Currently, as I mentioned, I think LLaMA is quite nice, although it is not commercially licensed, I believe, right now. Some things to keep in mind are that fine tuning is a lot more technically involved. It requires a lot more technical expertise to do right. It requires human data contractors for datasets and/or synthetic data pipelines that can be pretty complicated. This will definitely slow down your iteration cycle by a lot. And I would say, at a high level, SFT is achievable, because you're just continuing the language modeling task; it's relatively straightforward. But RLHF, I would say, is very much research territory and is even much harder to get to work, and so I would probably not advise that someone just tries to roll their own RLHF implementation. These things are pretty unstable, very difficult to train, not something that is, I think, very beginner friendly right now, and it's also likely to change pretty rapidly still. So I think these are my sort of default recommendations right now. I would break up your task into two major parts: Number 1, achieve your top performance, and Number 2, optimize your cost, in that order. For Number 1, the best performance will currently come from the GPT-4 model. It is the most capable of all by far. Use prompts that are very detailed, with lots of task context, relevant information, and instructions. Think along the lines of what you would tell a task contractor if they can't email you back, but then also keep in mind that a task contractor is a human, and they have an inner monologue and they're very clever, etc. LLMs do not possess those qualities. So make sure to think through the psychology of the LLM, almost, and cater prompts to that. Retrieve and add any relevant context and information to these prompts. Basically, refer to a lot of the prompt engineering techniques. Some of them I've highlighted in the slides above, but this is a very large space, and I would just advise you to look for prompt engineering techniques online; there's a lot to cover there. Experiment with few-shot examples. What this refers to is, you don't just want to tell, you want to show whenever it's possible. So give it examples of everything that helps it really understand what you mean, if you can. Experiment with tools and plug-ins to offload tasks that are difficult for LLMs natively, and then think about not just a single prompt and answer; think about potential chains and reflection, and how you glue them together, and how you can potentially make multiple samples and so on.
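On the parameter-efficient fine tuning point from a moment ago, here is a toy sketch of the LoRA idea: keep the base weights frozen and learn a small low-rank update on top of them. Real implementations, for example in the peft library, add scaling, dropout, and careful initialization that are omitted here:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                   # base weights stay clamped
        self.A = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(rank, base.out_features))

    def forward(self, x):
        return self.base(x) + x @ self.A @ self.B     # only A and B receive gradients
```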
Finally, if you think you've squeezed everything out of prompt engineering, which I think you should stick with for a while, look at potentially fine tuning a model to your application, but expect this to be a lot slower and more involved. And then there's an expert, fragile research zone here, which I would say is RLHF; it currently does work a bit better than SFT if you can get it to work, but again, this is pretty involved, I would say. And to optimize your costs, try to explore lower-capacity models or shorter prompts and so on. I also wanted to say a few words about the use cases that I think LLMs are currently well suited for. In particular, note that there are a large number of limitations to LLMs today, so I would keep that definitely in mind for all of your applications. This, by the way, could be an entire talk on its own, so I don't have time to cover it in full detail. Models may be biased, they may fabricate or hallucinate information, they may have reasoning errors, they may struggle with entire classes of applications, they have knowledge cutoffs, so they might not know any information past, say, September 2021. They are susceptible to a large range of attacks, which are coming out on Twitter daily, including prompt injection, jailbreak attacks, data poisoning attacks, and so on. So my recommendation right now is: use LLMs in low-stakes applications. Combine them always with human oversight. Use them as a source of inspiration and suggestions, and think copilots, instead of completely autonomous agents that are just performing a task somewhere. It's just not clear that the models are there right now. So I wanted to close by saying that GPT-4 is an amazing artifact. I'm very thankful that it exists, and it's beautiful. It has a ton of knowledge across so many areas. It can do math, code, and so on. And in addition, there's this thriving ecosystem of everything else that is being built and incorporated into the ecosystem. Some of these things I've talked about, and all of this power is accessible at your fingertips. So here's everything that's needed in terms of code to ask GPT-4 a question, to prompt it, and get a response. In this case, I said: can you say something to inspire the audience of Microsoft Build 2023? And I just punched this into Python, and verbatim, GPT-4 said the following. And by the way, I did not know that they used this trick in the keynote, so I thought I was being clever, but it is really good at this. It says: ladies and gentlemen, innovators and trailblazers of Microsoft Build 2023, welcome to the gathering of brilliant minds like no other. You are the architects of the future, the visionaries molding the digital realm in which humanity thrives. Embrace the limitless possibilities of technologies and let your ideas soar as high as your imagination. Together, let's create a more connected, remarkable, and inclusive world for generations to come. Get ready to unleash your creativity, canvas the unknown, and turn dreams into reality. Your journey begins today!
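The exact snippet from that slide isn't captured in the transcript, but at the time a call like that looked roughly like the sketch below, using the then-current OpenAI Python client; the API key is a placeholder:

```python
import openai

openai.api_key = "YOUR_API_KEY"   # placeholder
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user",
               "content": "Can you say something to inspire the audience of Microsoft Build 2023?"}],
)
print(response["choices"][0]["message"]["content"])
```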
