So, I'm now super thrilled to introduce to you a legendary person, someone who has been hacking and educating at the forefront of AI for over a decade. From neural networks to computer vision, from natural language processing to reinforcement learning, he has pushed the boundaries and inspired millions all over the world, including, I think, all of us here. He is a distinguished machine learning superstar, a founding member of OpenAI, the reference human for ImageNet, ex-Google Brain, ex-DeepMind, ex-Tesla, Mr. Autopilot, he has really seen it all. And some months ago, on a memorable day, this special person joined the CUDA MODE Discord to start hacking with others on llm.c, which became one of the greatest and most active community projects on our server. But I guess it's best if he tells the story himself. So please join me in welcoming the incredible one and only, Andrej Karpathy!

Wow. Okay. Very impressive. Okay, yeah, I'm very excited to be here. This is my favorite kind of event to present at. So yeah, thank you for the invitation and thank you for running CUDA MODE and putting this on. This is a wonderful event. Okay. So I'll tell you a bit about llm.c. So what are we doing? We're training transformers in C and a pinch of C++.

Okay, so I'd like to tell the story a little bit of how this project came about and what it looks like from my perspective. So roughly a year ago, I was trying to add a video to my YouTube series, and I was trying to teach people LLM training, GPT training, and so on. And I was basically hacking on nanoGPT, trying to get it to work. So that was me. And then, you've all worked with PyTorch, of course, right? So the trickiness is that, okay, you have your model, which you've written, and that makes sense. But now you have to keep track of a number of abstractions at the same time. You have to move it to a device. You want to compile it. You want to wrap it in DDP. And suddenly things start to be a little bit more complicated, because I'm not even sure in what order you do these. What exactly happens? What are these abstractions? What do they do to your model? I don't fully understand how any of this works.

And then what happens is you want to use your model in different ways. So you want to use it in evaluation, in training, in model inference, and so on. And what happened to me is that I was able to train the model, but for some reason eval and inference were not working. I was getting some kind of a torch compile error when I was trying to run my eval and my inference. And this is just an illustrative example of a torch compile error; it was something else, I don't remember, I didn't capture it. But both of them were giving me errors, inference and eval, each a different error, and I had no idea what was going on. So I did what anyone would do in my position: I went to the PyTorch discussion forum, and I was looking for ptrblck to solve my issue. Unfortunately, ptrblck did not have any guidance that I could see on that specific error. So I was kind of stuck, honestly. So, two hours of fighting with torch compile later, trying to figure out what the hell was going on, I'm kind of a sad panda. I don't know exactly how to solve this. And so I felt like I was going through the stages of grief. In the beginning, I was in denial. I was like, this can't be happening to me. I'm not doing anything crazy. I'm just training a little GPT. Why is this not working? This seems really simple.
And then eventually I entered the stage of anger. And I was like, okay, you know what? I'm just going to write the whole thing. I understand in my mind what I'm trying to do. The computation itself, the algorithm itself, is totally clear in my mind, and for some reason torch compile doesn't let me use it, run it, et cetera. So I felt a little bit powerless. And I was like, okay, I'm going to take life into my own hands and be in control of my destiny. I'm going to just write this in C. How bad could it be?

So let's think about what PyTorch is offering you, really. And there are many things, but maybe these are the ones that are relevant here. I don't know why those bullet points are on top of one another; on my slides it's totally fine, so I don't know what conversion happened here. Okay. But number one, we're getting an array, right? A very useful n-dimensional array that we can manipulate with operations. If we're going to abandon this, we're going to have to do a lot of pointer arithmetic, basically making sure that we ravel and unravel indices correctly. Second, we're getting autograd for free. If we don't have autograd, we need to write the forward and backward passes of all the layers ourselves. We don't have device, so we have to worry about memory being on the host or on the device, and shoveling memory around between the CPU and the GPU, and so on. We don't have simple dtype conversions, so we have to be very mindful of what tensors are stored in what precisions and convert explicitly between them. We don't have torch compile, so we're going to have to do all the kernel fusions that we want manually, and we're going to have to optimize for space and time performance manually. And finally, we don't have distributed, so we're going to have to manually spin up all of our processes, make sure that they can find each other, communicate with NCCL, et cetera. So PyTorch is really, really nice, and this is just some of what it offers. Without PyTorch, we're kind of naked in the world, right? But maybe it's okay.

So yeah, how bad could it be? Step one, we have our PyTorch code, which now isn't the primary thing we're working with; it's only a reference that we check correctness against. So we're in PyTorch land. Everything is nice and clean. We have a little transformer, a few modules, and we're just calling them, so everything is great. And that now becomes our reference in PyTorch. I'd like to just take you through one example of a layer. So for example, layernorm here is a PyTorch layer, and we'd like to basically port it over to C. So what kind of process do we go through? Well, we're going to iterate through all the layers. Number one, we need the forward pass, and I actually had to write the forward pass of layernorm explicitly, because PyTorch doesn't just have this kind of implementation of layernorm sitting there in PyTorch; it's kind of a block, and eventually it calls into some CUDA kernels. So I had to write the forward pass of layernorm and make sure it's equivalent to the layernorm in PyTorch. And then, of course, I had to write the backward pass of layernorm. This is where you kind of take out your pen and paper and do some backprop. This slide is for batchnorm, but layernorm would be similar. And yeah, we have to write the backward pass.
And again, this is all still in PyTorch, but it's explicit, and you're just making sure that the layernorm of PyTorch, forward and backward, matches this basically manual, tensor-based implementation. So now we have PyTorch code, forward and backward.

The next thing we do is try to port it to C. And this is actually a lot simpler in many cases than you might think. So on the left, we have the PyTorch code, and on the right, we basically have the equivalent layernorm forward in C. And it's not that crazy, right? Unlike in PyTorch, we just have a bunch of float* arrays. So we have float* inputs, outputs, means, standard deviations, weights and biases, and some hyperparameters. And one thing I really like to do in llm.c is keep things simple. I don't want to create a tensor abstraction. I don't want to create any abstraction, really. It's just float arrays and operations on float arrays. Why should it be a lot more complicated than that? So everything is just float arrays, everything is fully self-contained, and there are no underlying representations or abstractions to call, import, et cetera. This is the layernorm forward on float arrays, and that's it. So that's the forward, and then you also do the backward for all the layers.

Once we've done that for all the layers, converted everything to C, and made sure that everything matches our reference implementation, we have to start to string it together. So we go into our C code, in main, and we have to allocate all of the memory that we're going to be using. In llm.c, all of the allocation happens a single time, at the beginning. We pre-plan all of the memory that we're ever going to use; then it's fixed, and from then on it's just the dynamics of feeding data through it and training the model. So we have to pre-plan all of the tensors and their sizes. And we have to do that for the parameters, where we need the data, the grad, and the m and v AdamW buffers, and then for the activations as well, where we need space for both data and grad. So you just pre-plan all of the memory, you allocate all of it, and then we need to stitch it all up. We have all of these layers, and each has a forward and a backward pass for backpropagation. On the forward pass, you've already allocated all these tensors, and you're very careful to index into them properly and make sure everything flows correctly through. You just call all the forwards and then all the backwards, and then you're kind of done: you're left with gradients, and you can do an update. So stringing that together is the second piece of work.

And then once we've strung it together, you get something that you can just compile and run. So on the top left is everything that's required. We download a starter pack, which is really just the GPT-2 weights in a single binary file. Very simple. And we also need the dataset, in this case tiny Shakespeare, and the tokenizer and stuff like that. And then we just compile and run this little C file. It's a single file of C at this point, and I think it's like 2,000 lines or something like that, if I remember correctly. And you run that program, and it does a little training and outputs some Shakespeare at the end. And then we can verify that the PyTorch code is identical to the C code, and everything is great. We're just running in C.
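To make the earlier layernorm example concrete for readers without the slides, here is a rough sketch of what a layernorm forward over plain float arrays looks like in this style. It follows the spirit of llm.c's layernorm_forward (including the by-hand `(b * T + t) * C` indexing), but it is a hand-written approximation rather than the exact upstream code:

```c
// Rough sketch of a layernorm forward over flat float arrays, in the spirit
// of llm.c (an approximation, not necessarily the exact upstream code).
// inp is (B, T, C); mean and rstd are (B, T) caches for the backward pass.
#include <math.h>

void layernorm_forward(float* out, float* mean, float* rstd,
                       const float* inp, const float* weight, const float* bias,
                       int B, int T, int C) {
    const float eps = 1e-5f;
    for (int b = 0; b < B; b++) {
        for (int t = 0; t < T; t++) {
            // ravel (b, t) by hand: this row starts at offset (b*T + t)*C
            const float* x = inp + (b * T + t) * C;
            float* o = out + (b * T + t) * C;
            // mean over the channel dimension
            float m = 0.0f;
            for (int i = 0; i < C; i++) { m += x[i]; }
            m /= C;
            // variance over the channel dimension
            float v = 0.0f;
            for (int i = 0; i < C; i++) { float d = x[i] - m; v += d * d; }
            v /= C;
            float s = 1.0f / sqrtf(v + eps);
            // normalize, then scale and shift
            for (int i = 0; i < C; i++) {
                o[i] = (x[i] - m) * s * weight[i] + bias[i];
            }
            // cache mean and reciprocal std for the backward pass
            mean[b * T + t] = m;
            rstd[b * T + t] = s;
        }
    }
}
```

The CPU side of the project is essentially a collection of functions of this shape, one forward and one backward per layer, all operating on pre-allocated float buffers.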
And at this point, I'm actually feeling quite great, because this is amazing. We have a single file of C with no dependencies whatsoever. It compiles instantly, it runs instantly, and all of the memory is just allocated in a single blob, so if you start stepping, there's no way you're going to OOM later. It's all pre-planned. It's fully deterministic. And it can, in principle, train GPT-2. It's complete; it will train GPT-2, you just have to wait a long time. And it can run on a potato. It can just run on anything. It's just a single file of C with no dependencies. And in principle, this would be a great candidate to run on a von Neumann probe in space, if we just harden it a little bit more, because you're not going to ship PyTorch code on a von Neumann probe. But I think llm.c is a great candidate for that. So I was feeling great at this point.

A fun side note, by the way: all of the work that I've described so far happened on a vacation, while I was jet-lagged, in the Maldives. It's basically perfect, because you wake up at 1 a.m. and there's nothing to do, so you write stuff like llm.c, and then at sunrise you go do all the water activities. So that is the villa where most of llm.c was trained. This is a picture of it; I think the moon is about to set and the sunrise is about to happen. This is a recommended way to do software development.

Okay. So now we have C code, but it's inefficient, so we'd like to run it faster. For that, we reach for GPUs. We need to convert all of our C code to GPU code. This is where we go to the dev/cuda part of the repo, and we start to develop all the kernels. So here's the layernorm forward pass, as I mentioned. And now we're going to develop a number of kernels that have identical functionality, but run on the GPU, and they're going to be faster. Usually we have versions one, two, three, four, five, six, et cetera. These are all different kernel implementations; they usually get a bit faster over time, but they match the specification exactly and give the exact same numbers. So we develop all those layers and port them to CUDA. And this, I don't know what this is, I'm going to skip that. It's one of the kernels. Basically, the point here is that the first kernel is usually trivial, because you're parallelizing over batch and time, and then you're basically copy-pasting the C code into your CUDA kernel. And you're already getting speedups, because you're parallelizing over the batch-time tokens, and each thread just handles a single output element. So the first kernel is usually trivial, but then the optimizations can get pretty elaborate. By the end, we get to kernel six, for example, in layernorm, and we're doing a lot of things that are a bit more complicated. We have some warp reduce operations, we communicate through shared memory and through global memory and orchestrate it correctly, cache streaming hints, and a bunch of little tips and tricks for dealing with everything. And I'm going to go into a bit more detail later. But you can get arbitrarily complicated here writing the CUDA code.
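To make the "first kernel is trivial" point concrete, here is a hedged sketch of what such a version-1 CUDA port of the layernorm forward tends to look like: parallelize over the B*T token positions, give each thread one row, and essentially paste the inner C loops into the kernel body. Again, this is an illustration of the approach, not the exact llm.c kernel.

```cuda
// Sketch of a naive "version 1" CUDA port of the layernorm forward:
// one thread per (b, t) token position, looping over C, essentially a
// copy-paste of the C code into a kernel. Illustrative, not the exact
// llm.c kernel.
__global__ void layernorm_forward_kernel1(float* out, float* mean, float* rstd,
                                          const float* inp, const float* weight,
                                          const float* bias, int N, int C) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;  // which row, N = B * T
    if (idx >= N) return;
    const float* x = inp + idx * C;
    float* o = out + idx * C;
    float m = 0.0f;
    for (int i = 0; i < C; i++) { m += x[i]; }
    m /= C;
    float v = 0.0f;
    for (int i = 0; i < C; i++) { float d = x[i] - m; v += d * d; }
    v /= C;
    float s = rsqrtf(v + 1e-5f);
    for (int i = 0; i < C; i++) {
        o[i] = (x[i] - m) * s * weight[i] + bias[i];
    }
    mean[idx] = m;
    rstd[idx] = s;
}

// Launch: one thread per row, e.g.
//   int N = B * T, block = 256;
//   layernorm_forward_kernel1<<<(N + block - 1) / block, block>>>(
//       d_out, d_mean, d_rstd, d_inp, d_weight, d_bias, N, C);
```

The later kernel versions then replace these serial loops over C with warp- and block-level reductions, shared memory, vectorized loads, and the other tricks mentioned above.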
One thing that I found in this project is that it's not exactly trivial to learn CUDA, unfortunately. It was a little bit harder than I expected. I knew some CUDA going in, but getting better at it is, I think, not trivial. Some of the books, unfortunately, are a bit out of date, as you might know. PMPP is actually quite good, but also, I think, still mostly at the beginner level, because a lot of the CUDA code that we ended up developing over the lifetime of the llm.c project, you would not find in this book. A lot of the kernels that we ended up adding would just not be covered. And then on top of that, you have the CUDA C++ Programming Guide, which frankly is not exactly readable for someone who is a bit new to CUDA. And then you have this amazing blog post from Simon, who's at Anthropic, that is way better than anything we deserve, just randomly out there on the internet. So that was incredible, and if there were just more of that, it would be so amazing. So I found it a little bit difficult, but I'm hoping that things like CUDA MODE can definitely speed up the path to writing CUDA.

Okay, so next, what happened is I was basically struggling with the CUDA code a little bit. I was reading through the book and implementing all these CUDA kernels, and they were okay CUDA kernels, but not great. And so a team of Avengers assembled from the internet when they saw this and started contributing. Specifically Eric, Arun, and Aleksa are, I would say, core devs of llm.c and have contributed a ton of work to it. They started to really optimize and write all these kernels, and this was incredible to watch and learn a lot from. And there are many more, Ross Wheeler and Chinthysl and a few others; over time we have had 60 contributors to the llm.c project. Shout out to Lambda for sponsoring llm.c; they contributed compute so that we could run and optimize all these kernels. So it was amazing for me that people just came from the internet and helped out on the project. And this is one of my favorite things that can happen with an open-source, MIT-licensed repo: people just come from the internet and help contribute. It's amazing.

Okay, so we've converted all the layers to CUDA. We have all the kernels, and we can now train on a single GPU, in FP32 so far. So that's great. From then on, we start to make more and more optimizations. Number one, we don't want to keep our own hand-rolled FP32 matmuls, so we switched to cuBLAS. Step two, we don't want to write our own flash attention; I think that would be pretty complicated. It turns out cuDNN has a very good flash attention implementation, so we switched to that. Next, you definitely want to reach for mixed precision to speed up the code. So you go over all your tensors, the parameters and also the activations and so on, and you start to think about which ones are in float32, which ones are in bfloat16, what precision each one is in, and then do all the conversions explicitly. So we reached for that and implemented it. There are many, many other optimizations that we ended up implementing over time. As an example, we did all the kernel fusions, and different recompute settings to recompute pieces of the forward pass during the backward pass. There have been a lot of optimizations from Eric, especially on minimizing the amount of memory that you need during the backward pass. We have this Packed128 data structure, which basically, in our experience, forces the compiler to use the 128-bit load and store instructions that are available, but that the compiler is somehow unwilling to use in many cases.
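The Packed128 idea can be sketched roughly like this; the real llm.c version is a more general templated C++ struct with extra helpers, so treat this as a simplified illustration of the trick rather than the actual implementation:

```cuda
// Simplified illustration of the Packed128 trick: a 16-byte aligned bundle
// that the compiler can move with a single 128-bit load/store instruction.
// The real llm.c Packed128 is a templated struct with more functionality.
struct alignas(16) f128 {
    float payload[4];  // 4 x 32-bit floats = 128 bits
};

__device__ inline f128 load128(const float* address) {
    // one 128-bit load instead of four 32-bit loads (address must be 16-byte aligned)
    return *reinterpret_cast<const f128*>(address);
}

__device__ inline void store128(float* address, f128 value) {
    // one 128-bit store
    *reinterpret_cast<f128*>(address) = value;
}

// Example: a kernel that processes 4 floats per thread via 128-bit accesses.
// Assumes N is a multiple of 4 and the pointers are 16-byte aligned.
__global__ void scale_kernel(float* out, const float* in, float alpha, int N) {
    int idx = 4 * (blockIdx.x * blockDim.x + threadIdx.x);
    if (idx >= N) return;
    f128 v = load128(in + idx);
    for (int k = 0; k < 4; k++) { v.payload[k] *= alpha; }
    store128(out + idx, v);
}
```

Moving 128 bits at a time is also, as far as I can tell, roughly where the cache streaming hints mentioned above (the __ldcs family of intrinsics) come into play in the real code.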
So I think Arun did a lot of work here, where you just look at the SASS, the assembly, and you look at what instructions are being used for your loop, and you figure out that, okay, there should be a 128-bit load and store here, but it happens to be a 32-bit one or something else, because something in the NVCC compiler is not going very well. So we found that this data structure kind of forces the compiler's hand a bit more. We implemented all kinds of CUDA streams to overlap parts of the computation, and this ended up creating a total disaster. That's why I scratched it out, because at one point in llm.c, as Arun would say, I basically went in and nuked it from orbit. I just control-F'd for all mentions of stream, and I just deleted, deleted, deleted. Basically I deleted all the streams and made everything run on a single stream, because we ended up getting all kinds of really weird race conditions and errors and so on, and I just didn't want to deal with it. So llm.c is not actually as overlapped as it could be, but it's just too much complexity for not enough gain at this point. Maybe we can slowly reintroduce some of it. We have stochastic rounding, we have full determinism. Full determinism turns out to be pretty hard, because some of the kernels complexify a lot when you can't use atomics. The encoder backward was especially crazy, because the encoder backward is trivial with atomics but non-trivial without them. Anyway, a lot of the optimizations went in with efficiency and determinism in mind, and accuracy too, with things like stochastic rounding and so on.

Next, you want to use multiple GPUs, not just a single GPU. This is where you bring in NCCL, and you start to do an all-reduce between all the different workers. And this is where you also start to reach for a sharded optimizer state, ZeRO-1, where basically you take your optimizer states, which are in float and are really large buffers for AdamW, and you spread them out across all the GPUs, and it really helps to keep your memory requirements per GPU down. So it's very helpful to reach for that. Currently, llm.c uses ZeRO-1, which shards the optimizer state. There's a PR for ZeRO-2, but I don't believe I've merged that yet, because it gets a little bit messy, but it might be merged eventually. A lot of llm.c is just balancing the improvement in speed against the complexity of what you're actually introducing, and I've actually rejected a lot of PRs because of that, because the code starts to get crazy, and I think that decreases the number of people that can onboard onto the project. And then after multi-GPU, you have multi-node, so now you are running across multiple machines, and you have to make sure that you synchronize all of them, that they can find each other, and so on. So we implemented all that.
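As a rough sketch of what this multi-GPU step amounts to (illustrative only: the helper kernel and hyperparameters are made up for the example, only the NCCL calls are real API, and the real llm.c handles many more details such as bias correction, gradient clipping, and mixed precision): average the gradients across ranks, have each rank run AdamW on just its own shard of the parameters and optimizer state, then all-gather the updated parameters.

```cuda
// Illustrative sketch of a multi-GPU training step with a ZeRO-1 style
// sharded optimizer, in the spirit of llm.c (simplified: no bias correction,
// no error checking, num_params assumed divisible by nranks).
#include <nccl.h>
#include <cuda_runtime.h>

// Minimal AdamW update for one shard of the parameters.
__global__ void adamw_kernel(float* p, const float* g, float* m, float* v,
                             size_t n, float lr, float beta1, float beta2,
                             float eps, float weight_decay) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i >= n) return;
    m[i] = beta1 * m[i] + (1.0f - beta1) * g[i];
    v[i] = beta2 * v[i] + (1.0f - beta2) * g[i] * g[i];
    p[i] -= lr * (m[i] / (sqrtf(v[i]) + eps) + weight_decay * p[i]);
}

void multi_gpu_step(float* params, float* grads, float* m_shard, float* v_shard,
                    size_t num_params, int rank, int nranks,
                    ncclComm_t comm, cudaStream_t stream) {
    // 1) average gradients across all ranks (ncclAvg needs a recent NCCL;
    //    older versions would use ncclSum and scale by 1/nranks afterwards)
    ncclAllReduce(grads, grads, num_params, ncclFloat, ncclAvg, comm, stream);

    // 2) ZeRO-1: each rank owns 1/nranks of the parameters; the big AdamW
    //    m/v buffers exist only for that shard on this rank
    size_t shard = num_params / nranks;
    size_t offset = (size_t)rank * shard;
    int block = 256;
    adamw_kernel<<<(int)((shard + block - 1) / block), block, 0, stream>>>(
        params + offset, grads + offset, m_shard, v_shard,
        shard, 3e-4f, 0.9f, 0.95f, 1e-8f, 0.1f);

    // 3) all-gather the updated shards so every rank has the full parameters again
    ncclAllGather(params + offset, params, shard, ncclFloat, comm, stream);
    cudaStreamSynchronize(stream);
}
```

A reduce-scatter of the gradients would be the more efficient variant, but the memory win is the same either way: the large optimizer-state buffers only exist in 1/nranks slices on each GPU, which is exactly the point made above.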
And where that leads us is that we can actually train GPT-2, and actually reproduce it, after all of that work. There's a post about this in the discussions of the llm.c repo. We can train the 1.6 billion parameter GPT-2, which was a state-of-the-art LLM as of 2019 or so, on a single node of H100s in about 24 hours, and that costs roughly $600. And the way you do that is extremely dependency-free: there's no need for Python, no need for PyTorch. You do need cuDNN, which is the hairiest and heaviest dependency, but cuDNN is optional, so if you'd like to roll your own manual attention, that is possible in llm.c. After that, it's just a bunch of C code. You compile it and you run it. There's no need for really anything: no conda environments, no pip installs, just nothing, which is amazing. Then you compile your code, you run it, and it starts stepping. You wait 24 hours while it steps and prints some diagnostics. We get almost 50% MFU here on one node, which is quite good. And you get really nice plots, and you beat GPT-2 on HellaSwag. Basically, this just indicates that the optimization went well: no crazy numerical issues, loss spikes, or anything like that at this size. And yeah, we end up with a really good model in llm.c.

We can still compare to PyTorch, because remember, we have the PyTorch implementation for all of this in parallel on the side. So you can run the almost-equivalent training loop in PyTorch, and we can compare the two implementations side by side. In particular, at the time of writing that post, and I don't know if this has changed because the PyTorch team continues to optimize things over time, but at the time of that post, we were using 30% less memory in llm.c, and we were 20% faster in training, just in throughput. And I don't know if I fully, super-duper optimized the PyTorch implementation; I did my personal best. But we were able to, I think, beat PyTorch at training specifically GPT-2 in llm.c. If you want to train anything else, you're in a lot of trouble; you have to change the code a lot. We're doing that, and I'll come back to it. But for GPT-2 training, we're better, after all that work. And it also compiles and runs much faster, which is beautiful. torch compile actually takes quite a bit of time, like a minute or something, where you're just waiting. So that's also something that I personally don't like to work with, usually.

Okay. So looping back around, it turns out it wasn't all that simple. There was a lot of stuff involved, and it took a few months for a few people. But it was fun, we learned a lot, and we made friends along the way. This is the llm.c core devs. So it was great. Ongoing work: we are adding Llama 3 support. We actually thought maybe we would have it done by today, but there's a little bit more work to do. But we will have Llama 3.1 training in llm.c very, very soon. We will have FP8 support; Arun has been working on this, and there's a big PR coming for FP8 support, which is also interesting. And there are a lot of notable forks of llm.c; they're all listed on the GitHub repo. The AMD fork is very active, as far as I understand, and quite good. I think the C++/CUDA fork is also quite nice. So there are a lot of forks, and I encourage you to also fork llm.c. It's fairly readable, I think. I tried to keep it clean and well documented, and I think it's pretty well understood what's in there. It's only maybe, I think, 3,000 lines of code, of basically mostly C.

And one more thought I wanted to get across is that it wasn't all that haphazard to start the project; I had another motivation for it. And that's that, I mean, what is llm.c? If PyTorch, and especially torch compile, is a bit like GCC for Software 2.0, a compiler, then llm.c is a bit like writing assembly. We're doing everything manually, right?
And basically, I think, we wrote llm.c as multiple people over a duration of three months and got something that was faster than PyTorch in the specific setting of GPT-2 training. And so this exercise basically proves that this is possible. Now, the problem is that you need multiple people for several months. But if LLMs are about to become much better at coding over time, then I think you can expect that an LLM could actually do this for any custom application over time. And so LLMs could act as a kind of compiler for any custom application you're interested in: they would do all of this llm.c-style work and output a binary that you can run for your specific application. So I don't actually know if, like, the use of Python and PyTorch and everything else is just a crutch, because we humans are finite: we have finite knowledge, intelligence, and attention. Don't you actually want to write all code in custom CUDA kernels and so on? Like, maybe.

And the other thing that I think is interesting is that the llm.c repo might be useful because, in the early stages of these LLMs and their intelligence, they might not be able to write this code from scratch if you just prompted them with "write GPT-2 in C." You probably won't get llm.c. But you're a lot more likely to get it if you put llm.c in the context of such an LLM, and you can expect that few-shot learning would be very helpful, basically giving the LLM example code. And so I think llm.c could be very useful as this example code to get into the LLMs as they're about to write all of our custom applications. And I think this is actually not unlikely to happen; yeah, this is kind of likely to happen. So I think software development in general will probably change a lot. And to me, llm.c is an exploration of whether this is even possible, because if it is possible, then maybe this is what's going to happen. So yeah, that's it. Thank you.